
Center Identification Number:  77807   

Project Title:  Evaluation of Smart Video for Transit Event Detection  

Co-Principal Investigators:

Deborah Sapper
Senior Research Associate
Phone: (813) 974-1446
E-mail: sapper@cutr.usf.edu


Professor Dmitry B. Goldgof
Computer Science and Engineering
Phone: (813) 974-4055
E-mail: goldgof@cse.usf.edu

Institution:                             

Center for Urban Transportation Research
University of South Florida
Fax: 813-974-5168

External Project Contact:     

James Mike Johnson
Administrator-Transit Operations
Florida Department of Transportation Office of Public Transportation
(850) 414-4525

 

I.  Project Objective/Problem Statement

Transit agencies are increasingly using video cameras to fight crime and terrorism. As the volume of video data increases, existing digital video surveillance systems provide the infrastructure only to capture, store, and distribute video, while leaving the task of threat detection exclusively to human operators. Studies were done by Sandia National Laboratories for the U.S. Department of Energy to test the effectiveness of an individual whose task was to sit in front of video monitors for several hours a day and watch for particular events. The studies showed that even when the task is assigned to a dedicated and well-intentioned person, this method of using technology will not support an effective security system. After only 20 minutes of watching and evaluating monitor screens, the attention of most individuals degenerates to well below acceptable levels. Monitoring video screens is boring and mesmerizing, and offers no intellectually engaging stimuli. To address this problem, New Jersey Transit has connected over 1,400 of its cameras to computers that can automatically detect suspicious activity using a complex algorithm. Any abnormality flagged by the algorithm will set off an alarm, a page, or a call to whoever is responsible for that camera.

Other capabilities of smart video surveillance that can be used by transit agencies include:

  • The ability to preempt incidents - through real-time alarms for suspicious behaviors

  • Enhanced forensic capabilities - through content-based video retrieval

  • Situational awareness - through joint awareness of the location, identity, and activity of objects in the monitored space.

The cost-effectiveness of these systems for transit agencies will depend on independent verification of the systems' performance against the task(s) deemed most important by the transit agencies for the application. This proposal would determine task definitions, devise annotation guidelines, establish scoring metrics, and implement the metrics in scoring software. Commercial anomaly detection products will be studied (e.g., ActiveEye, http://www.activeye.com/, and Honeywell Rapid Eye, http://www.honeywellvideo.com/products/recorders/pc/66328.html), pilot system(s) will be selected, and a full cost-effectiveness evaluation strategy will be developed.

II.  Objectives/Tasks

The objective of this research project will be to study and develop an evaluation framework for commercial anomaly detection systems. The specific objectives of this work will be to:

  • Study various commercial anomaly detection systems

  • Develop an evaluation framework

Figure 1. Overall View for the Evaluation Framework

Figure 1 shows the overall view of the proposed evaluation framework. Various commercial systems will be studied and the relevant ones selected (described in Task 2). The raw videos are analyzed thoroughly to ensure they include events of interest (for example, unattended baggage in the concourse); more details can be found in the description of Task 6. A tool to mark these specific events is identified (we have already shortlisted a couple of candidates, but further testing needs to be performed). An expert (video analyst) will then mark the events of interest in the video using the selected tool. This "marked" video will act as the reference (ground truth) against which the system output will be compared. The raw video is also fed to the commercial system, which, based on its internal algorithm, estimates where the anomaly events have occurred. The evaluation software takes the system output and the expert reference and, based on the performance metrics, produces the scores. This process is entirely automated and is repeated in a batch process over all the video sequences in the dataset. The protocol is a technical document that formally defines all aspects of the evaluation. Finally, all the performance scores are subjected to statistical analyses and a report is generated based on the findings. To achieve these objectives, the following research tasks have been identified:

Task 1:  Project Management

Project management and review of draft reports, technical memos, and the final report.

Task 2:  Study Various Anomaly Detection Systems

The investigators will survey approximately twenty-five transit systems to determine whether they are using anomaly detection devices, the type of system being used in transit, and how the agencies are using them. From the information collected in the survey, the investigators will review and summarize commercially available anomaly detection systems. These systems will then be assessed for their capabilities by reviewing demos and brochures and, where possible, by downloading and testing them on sample video. Some systems make broad claims of anomaly detection, while others are more specific. Accomplishments at the end of this task: We will select the commercial anomaly detection systems that are relevant for our evaluation. We plan to look at all the known commercial systems.

Task 3: Define the Evaluation Task

A formal task definition is required for any evaluation task. The specific goal has to be formally defined. Also to be included are how the marking of the event (annotation) will be performed and what tags (metadata/labels identifying specific object properties) will be used for scoring. The system output tags must also be generated according to the rules of the annotations and formatted accordingly (XML, Extensible Markup Language, formatting makes it easy to parse and identify tags). The task definition will also specifically identify which annotation attributes will be used to measure performance. Accomplishments at the end of this task: The evaluation tasks are formally defined. This is the first part of setting the protocol. Doing this formalizes the internal details so that it is clear, without confusion, what anomaly needs to be identified.
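
To make the tag concept concrete, the sketch below shows one possible XML annotation entry and how it could be parsed. The element and attribute names (video, event, start_frame, etc.) and the sample values are illustrative assumptions for this sketch, not a format prescribed by any vendor or already decided for this project.

    import xml.etree.ElementTree as ET

    # Hypothetical annotation entry for an "unattended baggage" event.
    SAMPLE_ANNOTATION = """
    <video file="concourse_cam3.mpg">
      <event label="unattended_baggage" start_frame="1820" end_frame="2410">
        <object type="bag" id="7"/>
        <object type="person" id="3"/>
      </event>
    </video>
    """

    root = ET.fromstring(SAMPLE_ANNOTATION)
    for event in root.iter("event"):
        # These attributes are the kind of tags the scoring rules would consume.
        print(event.get("label"), event.get("start_frame"), event.get("end_frame"))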

Task 4: Design the Performance Metric

A performance metric (indicator) can be a single number that shows how an algorithm performs broadly on the specific task at hand. Numerous metrics that measure individual aspects of the algorithm also have to be defined to analyze the success and breakdown points of the systems. For an anomaly detection system it is necessary to know what the failure conditions are (if any), where they occur, and how frequently they occur in real-world conditions. For example, a bag left intentionally at a bus station will be detected as an anomaly through a series of events that must occur sequentially (we can call these sub-events). A system that correctly indicates all of these will eventually detect the particular anomaly. Given real-world conditions, a system may miss some of these events, and such misses will be penalized; partial credit is given when the system identifies a sub-event correctly. Accomplishments at the end of this task: Equations for the performance metrics.
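
As a rough illustration of the partial-credit idea, the sketch below scores a system on the fraction of required sub-events it reported. The equal weighting and the sub-event names are assumptions for illustration; the actual equations produced in this task may weight sub-events differently.

    def partial_credit_score(required_subevents, detected_subevents):
        """Return the fraction of required sub-events the system reported."""
        if not required_subevents:
            return 0.0
        hits = sum(1 for sub in required_subevents if sub in detected_subevents)
        return hits / len(required_subevents)

    # Example: the system missed the final "unattended" sub-event.
    required = ["bag_set_down", "owner_departs", "bag_unattended"]
    detected = ["bag_set_down", "owner_departs"]
    print(partial_credit_score(required, detected))  # prints 0.666..., i.e., two of three sub-events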

Task 5:  Study and Identify the Annotation Tool

Manual annotation is required to mark the specific frames within the video that indicate when an event starts and ends, or when an event has occurred. Some annotation tools also provide marking of individual areas within a frame; this is used to measure the spatial aspect of the system output being evaluated. Tools that can handle user-specified tags formatted in XML make the annotations easier to port and parse. To obtain an objective evaluation, we need to create a "ground truth" against which the system response can be compared. This task will identify the specific tool used to mark the anomaly events that occur in the video, as described in Task 7. This is similar to expert radiologists marking cancer regions in a mammography scan, which are then used as the ground truth so that automated comparison can be made with the system response. Accomplishments at the end of this task: The annotation tool to be used will be identified.
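
Where area marking is available, one common way to compare a system-reported region with the annotated region is intersection-over-union, sketched below. This is illustrative only, and the box format (x1, y1, x2, y2) is an assumption; the actual spatial measure will be fixed in the task definition and metric design.

    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
        inter_h = max(0, min(ay2, by2) - max(ay1, by1))
        inter = inter_w * inter_h
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union else 0.0

    # A system box covering part of the annotated region gets partial spatial credit.
    print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # roughly 0.14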

Task 6:  Study and Identify Specific Raw Videos to be Used

To truly evaluate systems, we will need to test them on unseen raw video data. Specific videos will need to be browsed and identified (a time-consuming process). Video feeds from different data sources, if available, will be beneficial; these will offer different viewpoints, illumination changes, geometric configurations, etc. Accomplishments at the end of this task: Raw videos rich in event data are identified as test data (typically 5-10 or more, depending on availability).

Task 7:  Manually Annotate Test Videos to Generate Ground Truth

Once the test set is ready, we will manually annotate the videos based on specific annotation rules (guidelines). These rules must be strictly adhered to in order to avoid discrepancies, and a quality-check methodology will be in place. The quality checks are crucial because the annotations are used as the gold standard against which the algorithm's performance is measured. Accomplishments at the end of this task: Expert-marked video files for all test data (typically 5-10 clips or more, depending on availability).
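
One possible automated element of the quality-check methodology, assuming the hypothetical XML layout sketched under Task 3, is a sanity check that flags annotations violating basic rules. The label set and rules below are assumptions for illustration; the full quality-check procedure will be defined together with the annotation guidelines.

    import xml.etree.ElementTree as ET

    ALLOWED_LABELS = {"unattended_baggage", "intrusion", "loitering"}  # assumed label set

    def check_annotation(xml_path, total_frames):
        """Return a list of rule violations found in one annotation file."""
        problems = []
        for event in ET.parse(xml_path).getroot().iter("event"):
            start = int(event.get("start_frame"))
            end = int(event.get("end_frame"))
            if event.get("label") not in ALLOWED_LABELS:
                problems.append("unknown label: %s" % event.get("label"))
            if not 0 <= start <= end <= total_frames:
                problems.append("bad frame range: %d-%d" % (start, end))
        return problems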

Task 8:  Develop the Scoring Tool

The scoring tool is a software package that integrates the ground truth, metrics, and system output. It compares the system output with the ground truth and, based on the metric design, produces the performance score. This tool will also have additional capabilities, including debugging, quality control for annotations, and varying degrees of options to include or exclude detailed scores. Accomplishments at the end of this task: The scoring tool package will be ready to use.
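
As a minimal sketch of the comparison step, assuming both the ground truth and the system output reduce to (start_frame, end_frame) event intervals for one task, the example below counts detections, misses, and false alarms for a single clip. Matching events by temporal overlap is an assumed strategy; the actual matching rules and scores follow the Task 4 metrics.

    def overlaps(ref, hyp):
        """True if two (start_frame, end_frame) intervals share any frames."""
        return max(ref[0], hyp[0]) <= min(ref[1], hyp[1])

    def score_clip(reference_events, system_events):
        """Count detections, misses, and false alarms for one video clip."""
        detected = [r for r in reference_events
                    if any(overlaps(r, h) for h in system_events)]
        false_alarms = [h for h in system_events
                        if not any(overlaps(r, h) for r in reference_events)]
        return {"detections": len(detected),
                "misses": len(reference_events) - len(detected),
                "false_alarms": len(false_alarms)}

    print(score_clip([(100, 200), (500, 650)], [(120, 180), (900, 950)]))
    # {'detections': 1, 'misses': 1, 'false_alarms': 1}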

Task 9:  Final Report

The final task of the research will summarize the results of the previous tasks in order to develop a report on the "Evaluation of Smart Video for Transit Event Detection." The report will be designed in a clear, concise summary format that will facilitate easy reading and application by public transit provider professionals. A PowerPoint presentation will also be developed to share the project findings with other relevant organizations. CUTR will coordinate with the Florida DOT to ensure the scope and activities are consistent with the Florida public transportation industry's goals and objectives. It is understood that travel by CUTR, within the budget of the project, will be necessary for collecting information. Particular locations where CUTR staff may travel within the State of Florida include Tallahassee, Tampa, Ft. Lauderdale, Miami-Dade, and Jacksonville; out-of-state travel may include New Jersey and Houston.

III. Deliverables

Deliverables for this project will include the following:

Quarterly Progress Reports - Quarterly progress reports will be provided to the FDOT Project Manager and the Research Office. Reports will include the following sections:

  1. Contract number, work order number, and title
  2. Work done during the quarter
  3. Work to be done in the following quarter
  4. Requested modifications to scope, budget, or schedule, as appropriate
  5. An updated project progress schedule

Technical Memorandum 1 - Technical Memorandum 1 will summarize the findings of the first four research tasks (Tasks 2 through 5): the survey of anomaly detection systems, including its development and administration; the evaluation task definition; the performance metric design; and the study and identification of the annotation tool.

Technical Memorandum 2 - Technical Memorandum 2, to be submitted at the end of Task 9, will detail the case studies, provide a concise recap of all the case studies, compare the findings, and identify success and failure conditions in the systems' performance in detecting anomaly events in the video.

Draft Final Report

Following the completion of Task 9 and Technical Memorandum 2, a Draft Final Report will be submitted for review. The draft final report will be edited for grammar, clarity, organization, and readability prior to submission to the Department for technical approval. The editor providing the review will sign a cover sheet attesting to such review prior to submission. It is expected that a well-written, high-quality report will be submitted. It is understood that reports failing to meet this requirement will summarily be rejected. The only changes allowable between the final draft and the final report will be those changes requested by the Project Manager and the Research Center.

Final Report

A minimum of 13 copies of the final report will be delivered to: Florida DOT, The Research Center, 605 Suwannee Street, MS 30, Tallahassee, FL 32399-0450. In addition, a camera-ready unbound original and an electronic copy in MS Word format on CD will be submitted no later than the end date of the RWPO.

One electronic copy in MS Word format of a Summary of the Final Report will also be provided to FDOT, to include the following four sections: Background, Objectives and Supporting Tasks, Findings and Conclusions, and the Benefit of the Project. The Summary shall be a separate document and should be approximately 500 words in length.

All Final Reports shall contain a completed Technical Report Documentation Form #F.1700.7 immediately after the title page. All Final Reports published shall contain a page after the Report Documentation Form that states the following:

  1. The opinions, findings and conclusions expressed in this publication are those of the authors and not necessarily those of the State of Florida Department of Transportation, or the U.S. Department of Transportation.
  2. Prepared in cooperation with the State of Florida Department of Transportation and the U.S. Department of Transportation.

All Final Reports should be bound with a front and back cover that is acceptable to the Department.

NOTE: All written deliverables will be submitted in electronic format to the Project Manager and the Research Center for processing. Electronic reports will be e-mailed to Sandra Bell at sandra.bell@dot.state.fl.us. Hard copies will be sent to the following address: Sandra Bell, Research Contracts Administrator, Florida Department of Transportation, 605 Suwannee Street, MS 30, Tallahassee, FL 32399-0450.

IV.  Project Schedule

 

V.  Project Budget

 

Budget Categories                      State

Center Director Salary                 $1,318
Faculty Salaries                       $45,021
Admin. Staff Salaries                  -
Other Staff Salaries                   -
Student Salaries                       $48,941
Staff Benefits                         $14,763
Total Salaries and Benefits            $110,043
Scholarships                           $10,000
Permanent Equipment                    $6,500
Expendable Property/Supplies           $275
Domestic Travel                        $5,000
Foreign Travel                         -
Other Direct Costs                     -
Total Direct Costs                     $131,818
Indirect Costs                         $13,182
Total Costs                            $145,000

Notes: This budget does not reflect any federal participation. The project team will include faculty, students, and secretarial and other support staff who will work directly on the project and whose costs are reflected in the direct costs of the project as listed above. The budget request includes salaries for clerical and administrative staff, postage, telephone calls, office supplies, general purpose software, subscriptions, and/or memberships.

VI. Equipment

Computers and software will be purchased to test video surveillance systems for this project.

VII. Travel

On-site interviews of agencies using anomaly detection systems and testing of systems will occur at up to eight agencies: six in-state and two out-of-state. Specific details for the project trips (i.e., destinations and travelers) have not been finalized; those determinations will be made upon substantial completion of the literature review. Prior to making any trips, the Principal Investigators will contact the FDOT Research Center to provide details on the researchers traveling and their destinations. Pre-approvals will be provided through email correspondence. In the event all of the trips are not necessary and/or some of the money budgeted for travel is not expended, the surplus will be reallocated to salaries and benefits with the approval of the FDOT Project Manager.

VIII. Student Involvement

This research project intends to use two Computer Science and Engineering graduate students and one undergraduate. The students will assist with Tasks 2-9. These tasks include identifying anomaly detection systems, defining and designing performance indicators, identifying the annotation tools, viewing raw video, annotating test videos for ground truth, and developing the scoring tools.

