Published June 15, 2012
Journal Article | Open Access

Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost

Abstract

To predict where subjects look under natural viewing conditions, biologically inspired saliency models decompose visual input into a set of feature maps across spatial scales. The outputs of these feature maps are summed to yield the final saliency map. We studied the integration of bottom-up feature maps across multiple spatial scales using eye movement data from four recent eye-tracking datasets. We used AdaBoost as the central computational module, which handles feature selection, thresholding, weight assignment, and integration in a principled, nonlinear learning framework. By combining the outputs of the feature maps via a series of nonlinear classifiers, the new model consistently predicts eye movements better than any of its competitors.
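
Below is a minimal sketch of the combination scheme the abstract describes, assuming scikit-learn's AdaBoostClassifier (whose default weak learner is a depth-1 decision stump, matching the thresholding-plus-weighting role described above). This is an illustration, not the authors' implementation, and names such as stack_feature_maps and fixation_mask are hypothetical: each pixel becomes a training sample whose feature vector holds the values of the K bottom-up feature maps at that location, and the boosted ensemble's real-valued score over all pixels is read back as the saliency map.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def stack_feature_maps(feature_maps):
    """Flatten a list of K HxW feature maps into an (H*W, K) sample matrix."""
    return np.stack([m.ravel() for m in feature_maps], axis=1)

def train_saliency_booster(feature_maps, fixation_mask, n_estimators=50):
    """Fit AdaBoost (decision stumps) to separate fixated from non-fixated pixels.

    fixation_mask is a hypothetical HxW boolean map of fixated locations
    derived from eye-tracking data.
    """
    X = stack_feature_maps(feature_maps)
    y = fixation_mask.ravel().astype(int)  # 1 = fixated, 0 = not fixated
    clf = AdaBoostClassifier(n_estimators=n_estimators, random_state=0)
    return clf.fit(X, y)

def predict_saliency(clf, feature_maps):
    """Nonlinear combination: the boosted classifier's score becomes the saliency map."""
    X = stack_feature_maps(feature_maps)
    return clf.decision_function(X).reshape(feature_maps[0].shape)

# Toy usage with random stand-ins for real conspicuity maps and fixations.
H, W, K = 32, 32, 6
rng = np.random.default_rng(0)
maps = [rng.random((H, W)) for _ in range(K)]
fixations = rng.random((H, W)) > 0.9
saliency = predict_saliency(train_saliency_booster(maps, fixations), maps)
```

Unlike a fixed linear sum of feature maps, each boosting round here selects one feature map and a threshold on it, so the final score is a weighted vote of nonlinear stump decisions rather than a weighted sum of raw map values.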

Additional Information

© 2012 ARVO. Received May 25, 2011; Accepted April 8, 2012. The authors would like to thank Jonathan Harel for helpful discussions. This research was supported by the NeoVision program at DARPA, by the ONR, by the G. Harold & Leila Y. Mathers Charitable Foundation, and by the WCU (World Class University) program funded by the Ministry of Education, Science and Technology through the National Research Foundation of Korea (R31-10008).

Attached Files

Published - Zhao2012p19095J_Vis.pdf (1.1 MB)
md5:4e19f28458950eb9bf51d090d463fa50

Additional details

Created: September 14, 2023
Modified: October 23, 2023