Published March 2011 | public
Book Section - Chapter

Learning visual saliency

Abstract

Inspired by the primate visual system, computational saliency models decompose the visual input into a set of feature maps across spatial scales. In the standard approach, the feature maps of the pre-specified channels are summed to yield the final saliency map. We study the feature integration problem and propose two improved strategies: first, we learn a weighted linear combination of features using a constrained linear regression algorithm; second, we propose an AdaBoost-based algorithm that handles feature selection, thresholding, weight assignment, and nonlinear integration in a single principled framework. Extensive quantitative evaluations of the new models are conducted on four public datasets, and improvements in the models' predictive power are shown.
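For illustration, the first strategy (a learned weighted linear combination of feature maps) can be sketched as a constrained least-squares fit of feature maps to a ground-truth fixation map. This is a minimal sketch, not the authors' implementation: the choice of non-negativity as the constraint, the normalization step, and all function names and inputs are assumptions made here for illustration.

```python
import numpy as np
from scipy.optimize import nnls


def learn_feature_weights(feature_maps, fixation_map):
    """Learn a weighted linear combination of saliency feature maps.

    Sketch of the constrained-linear-regression idea: regress the feature
    maps onto a ground-truth fixation map with weights constrained to be
    non-negative (the exact constraint set and preprocessing used in the
    paper may differ).

    feature_maps : list of 2-D arrays (e.g. color, intensity, orientation
                   channels), all of the same shape.
    fixation_map : 2-D array of the same shape, e.g. a blurred human
                   fixation density map.
    """
    # Each feature map becomes one column of the design matrix.
    A = np.column_stack([fm.ravel().astype(float) for fm in feature_maps])
    b = fixation_map.ravel().astype(float)

    # Non-negative least squares: minimize ||A w - b|| subject to w >= 0.
    w, _ = nnls(A, b)

    # Normalize so the weights sum to 1 (for interpretability only).
    if w.sum() > 0:
        w = w / w.sum()
    return w


def combine(feature_maps, weights):
    """Weighted sum of feature maps -> final saliency map, scaled to [0, 1]."""
    saliency = sum(w * fm for w, fm in zip(weights, feature_maps))
    return saliency / (saliency.max() + 1e-12)
```

In this sketch the learned weights replace the uniform summation of the standard model; the AdaBoost-based strategy described in the abstract additionally selects and thresholds features and combines them nonlinearly, which is not shown here.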

Additional Information

© 2011 IEEE. The authors would like to thank Jonathan Harel for helpful discussions on weighting different features. This research was supported by the NeoVision program at DARPA, by the ONR, the Mathers foundation, and the WCU (World Class University) program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (R31-10008).

Additional details

Created: September 15, 2023
Modified: October 23, 2023