Published January 2, 2019 | Supplemental Material + Published
Journal Article | Open Access

Brain-inspired automated visual object discovery and detection

Abstract

Despite significant recent progress, machine vision systems lag considerably behind their biological counterparts in performance, scalability, and robustness. A distinctive hallmark of the brain is its ability to automatically discover and model objects, at multiscale resolutions, from repeated exposures to unlabeled contextual data, and then to robustly detect the learned objects under various nonideal circumstances, such as partial occlusion and varying viewing angles. Replication of such capabilities in a machine would require three key ingredients: (i) access to large-scale perceptual data of the kind that humans experience, (ii) flexible representations of objects, and (iii) an efficient unsupervised learning algorithm. The Internet fortunately provides unprecedented access to vast amounts of visual data. This paper leverages the availability of such data to develop a scalable framework for unsupervised learning of object prototypes—brain-inspired flexible, scale-, and shift-invariant representations of deformable objects (e.g., humans, motorcycles, cars, airplanes) composed of parts, their different configurations and views, and their spatial relationships. Computationally, the object prototypes are represented as geometric associative networks using probabilistic constructs such as Markov random fields. We apply our framework to various datasets and show that our approach is computationally scalable and can construct accurate and operational part-aware object models much more efficiently than in much of the recent computer vision literature. We also present efficient algorithms for detection and localization in new scenes of objects and their partial views.
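To make the representation concrete, the sketch below illustrates (but does not reproduce the authors' implementation of) a part-based object prototype as a pairwise Markov random field: parts are nodes, edges carry expected spatial offsets between parts, and a candidate set of part detections is scored by how far its observed offsets deviate from the prototype's. All part names, offsets, and coordinates are hypothetical.

```python
# Hypothetical illustration of a part-aware object prototype as a
# pairwise MRF; not the paper's actual model or learned parameters.

# Expected offset (dx, dy) between connected parts of a "motorcycle"
# prototype. In the paper such structure is learned from unlabeled
# data; here it is hard-coded for illustration.
PROTOTYPE_EDGES = {
    ("wheel_front", "wheel_rear"): (60.0, 0.0),
    ("wheel_front", "handlebar"): (10.0, -40.0),
}

def pairwise_energy(detections, edges, sigma=10.0):
    """Gaussian pairwise MRF energy: sum of squared deviations of the
    observed part offsets from the prototype's expected offsets.
    Missing parts (partial views) simply contribute no terms."""
    energy = 0.0
    for (a, b), (exp_dx, exp_dy) in edges.items():
        if a in detections and b in detections:
            (xa, ya), (xb, yb) = detections[a], detections[b]
            dx, dy = xb - xa, yb - ya
            energy += ((dx - exp_dx) ** 2 + (dy - exp_dy) ** 2) / (2 * sigma ** 2)
    return energy

# Two candidate configurations of part locations in a test image
# (hypothetical pixel coordinates).
consistent = {"wheel_front": (100, 200), "wheel_rear": (160, 200),
              "handlebar": (110, 160)}
inconsistent = {"wheel_front": (100, 200), "wheel_rear": (100, 120),
                "handlebar": (200, 200)}

print(pairwise_energy(consistent, PROTOTYPE_EDGES))    # prints 0.0
print(pairwise_energy(inconsistent, PROTOTYPE_EDGES))  # much higher energy
```

A low energy indicates a part layout consistent with the prototype; detection and localization then amount to searching for low-energy configurations, which is where the efficient algorithms mentioned in the abstract come in.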

Additional Information

© 2018 National Academy of Sciences. Published under the PNAS license. Contributed by Thomas Kailath, April 23, 2018 (sent for review February 12, 2018; reviewed by Rama Chellappa, Shree Nayar, and Erik Sudderth). PNAS published ahead of print December 17, 2018. The authors thank Prof. Lieven Vandenberghe for his input on the optimization formulations used in the paper and the referees for helpful suggestions and especially for pointing us to relevant prior work. Author contributions: L.C., S.S., T.K., and V.R. designed research; L.C., S.S., T.K., and V.R. performed research; L.C. and V.R. analyzed data; and L.C., T.K., and V.R. wrote the paper. Reviewers: R.C., University of Maryland, College Park; S.N., Columbia University; and E.S., University of California, Irvine. The authors declare no conflict of interest. Data deposition: The in-house dataset used in the paper is shared publicly at https://www.ee.ucla.edu/wp-content/uploads/ee/cele_images_lite.zip. This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1802103115/-/DCSupplemental.

Attached Files

Published - 96.full.pdf

Supplemental Material - pnas.1802103115.sapp.pdf

Files (11.5 MB)

96.full.pdf (999.0 kB) md5:2e61c1cbf2186d8b820235cf19b7bb78
pnas.1802103115.sapp.pdf (10.5 MB) md5:0730509d542e9d480a755f6b2c2d464d

Additional details

Created: August 22, 2023
Modified: October 19, 2023