Automated event detection in underwater video
Book Section - Chapter
Published September 2003
Abstract
We present an attentional selection system for processing video streams from remotely operated underwater vehicles (ROVs). The system identifies potentially interesting visual events spanning multiple frames based on the low-level spatial properties of salient tokens, which are associated with those events and tracked over time. Frames that contain interesting events are labeled "interesting"; all others are labeled "boring". By marking the interesting events and omitting the "boring" frames from the output stream, we increase the productivity of human video annotators or, alternatively, provide input for a subsequent object-classification algorithm.
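The pipeline the abstract describes — extract salient tokens per frame, associate them over time, and label frames by whether a tracked event is present — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the saliency measure (raw pixel intensity against a threshold), the greedy nearest-neighbour association, and the parameter names (`max_jump`, `min_frames`) are all assumptions made for the example.

```python
def salient_tokens(frame, threshold=0.5):
    """Return (x, y) positions whose intensity exceeds a saliency threshold.

    Stand-in for a real low-level saliency operator (illustrative only).
    """
    return [(x, y)
            for y, row in enumerate(frame)
            for x, v in enumerate(row)
            if v > threshold]

def label_frames(frames, threshold=0.5, max_jump=1, min_frames=3):
    """Label each frame 'interesting' if it contains a token tracked for at
    least `min_frames` consecutive frames, otherwise 'boring'."""
    tracks = []  # each track: {"pos": last (x, y), "frames": [frame indices]}
    for i, frame in enumerate(frames):
        for pos in salient_tokens(frame, threshold):
            # Greedy association: extend a track that was seen in the
            # previous frame and lies within `max_jump` pixels.
            matched = None
            for t in tracks:
                if t["frames"][-1] == i - 1:
                    dx = abs(t["pos"][0] - pos[0])
                    dy = abs(t["pos"][1] - pos[1])
                    if dx <= max_jump and dy <= max_jump:
                        matched = t
                        break
            if matched:
                matched["pos"] = pos
                matched["frames"].append(i)
            else:
                tracks.append({"pos": pos, "frames": [i]})
    # An "event" is a token that persisted long enough; its frames are interesting.
    interesting = set()
    for t in tracks:
        if len(t["frames"]) >= min_frames:
            interesting.update(t["frames"])
    return ["interesting" if i in interesting else "boring"
            for i in range(len(frames))]
```

A token that drifts slowly across several frames forms an event and marks those frames "interesting", while a one-frame flash is discarded as "boring" — the behaviour the abstract attributes to the system, here reduced to its simplest form.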
Additional Information
We thank the David and Lucile Packard Foundation for their generosity in funding work at MBARI. D.W. and C.K. are funded by the NSF Center for Neuromorphic Systems Engineering at Caltech. We thank the NSF Research Coordination Network (RCN) Institute for Neuromorphic Engineering (INE) for support of collaborative travel. We thank the staff and management of the MBARI video lab for their support, help and interest in our project. This project was initiated at the 2002 Workshop for Neuromorphic Engineering in Telluride, Colorado.
Files
Published - Automated_event_detection_in_underwater_video.pdf (619.2 kB)
Additional details
- Eprint ID
- 120605
- Resolver ID
- CaltechAUTHORS:20230329-702589000.2
- Funders
- David and Lucile Packard Foundation
- Center for Neuromorphic Systems Engineering, Caltech
- Created
- 2023-03-30 (from EPrint's datestamp field)
- Updated
- 2023-04-26 (from EPrint's last_modified field)
- Caltech groups
- Koch Laboratory (KLAB)