Published October 2008 | Accepted Version
Book Section - Chapter (Open Access)

Multiple Component Learning for Object Detection

Abstract

Object detection is one of the key problems in computer vision. In the last decade, discriminative learning approaches have proven effective in detecting rigid objects, achieving very low false positive rates. The field has also seen a resurgence of part-based recognition methods, with impressive results on highly articulated, diverse object categories. In this paper we propose a discriminative learning approach for detection that is inspired by part-based recognition approaches. Our method, Multiple Component Learning (MCL), automatically learns individual component classifiers and combines these into an overall classifier. Unlike previous methods, which rely on either fairly restricted part models or labeled part data, MCL learns powerful component classifiers in a weakly supervised manner, where object labels are provided but part labels are not. The basis of MCL lies in learning a set classifier; we achieve this by combining boosting with weakly supervised learning, specifically the Multiple Instance Learning framework (MIL). MCL is general, and we demonstrate results on a range of data from computer audition and computer vision. In particular, MCL outperforms all existing methods on the challenging INRIA pedestrian detection dataset, and unlike methods that are not part-based, MCL is quite robust to occlusions.
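To make the MIL ingredient of the abstract concrete, below is a minimal sketch of the noisy-OR bag model commonly used when combining boosting with Multiple Instance Learning (in the style of MILBoost). It is illustrative only and not taken from the paper: the function names and the example scores are assumptions, and the weighting follows the standard MILBoost derivative of the bag log-likelihood rather than the authors' exact formulation.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def bag_probability(scores):
    """Noisy-OR bag probability: a bag is positive if at least one
    of its instances is positive."""
    return 1.0 - np.prod(1.0 - sigmoid(scores))

def instance_weights(scores, bag_label):
    """Gradient of the bag log-likelihood w.r.t. each instance score.
    A boosting round would fit the next weak classifier to these weights."""
    p_inst = sigmoid(scores)
    p_bag = 1.0 - np.prod(1.0 - p_inst)
    if bag_label == 1:
        # Positive bag: emphasize instances most responsible for the bag score.
        return p_inst * (1.0 - p_bag) / max(p_bag, 1e-12)
    # Negative bag: push every instance's probability down.
    return -p_inst

# Hypothetical example: instance scores from the current classifier for a
# positive bag (object present somewhere) and a negative bag (background).
pos_bag = np.array([-2.0, 0.5, -1.0])
neg_bag = np.array([-1.5, -0.3, -2.2])
print(bag_probability(pos_bag), instance_weights(pos_bag, 1))
print(bag_probability(neg_bag), instance_weights(neg_bag, 0))
```

In this weakly supervised setting, a "bag" corresponds to a labeled object window and its instances to candidate component locations, so part labels are never required.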

Additional Information

© 2008 Springer. PD and BB were funded by NSF IGERT Grant DGE-0333451. SB was funded by NSF Career Grant #0448615 and the Alfred P. Sloan Research Fellowship. ZT was funded by NIH Grant U54RR021813 entitled Center for Computational Biology.

Files

Accepted Version - DollarEtAlECCV08mcl.pdf (657.4 kB)
md5:00d6c90e8515b5afe9c33968fee6253e

Additional details

Created: August 19, 2023
Modified: January 13, 2024