A walk through the web's video clips
Abstract
Approximately 10^5 video clips are posted every day on the Web. The popularity of Web-based video databases poses a number of challenges to machine vision scientists: how do we organize, index and search such a wealth of data? Content-based video search and classification have been proposed in the literature and applied successfully to analyzing movies, TV broadcasts and lab-made videos. We explore the performance of some of these algorithms on a large data-set of approximately 3000 videos. We collected our data-set directly from the Web, minimizing bias for content or quality, so as to have a faithful representation of the statistics of this medium. We find that the algorithms that we have come to trust do not work well on video clips, because their quality is lower and their subject matter is more varied. We will make the data publicly available to encourage further research.
Additional Information
© 2008 IEEE.
Additional details
- Eprint ID: 18259
- Resolver ID: CaltechAUTHORS:20100512-132602796
- Created: 2010-06-03 (from EPrint's datestamp field)
- Updated: 2021-11-08 (from EPrint's last_modified field)