Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost
- Creators
- Zhao, Qi
- Koch, Christof
Abstract
To predict where subjects look under natural viewing conditions, biologically inspired saliency models decompose visual input into a set of feature maps across spatial scales. The outputs of these feature maps are summed to yield the final saliency map. We studied the integration of bottom-up feature maps across multiple spatial scales using eye movement data from four recent eye-tracking datasets. We use AdaBoost as the central computational module, which handles feature selection, thresholding, weight assignment, and integration in a principled, nonlinear learning framework. By combining the outputs of the feature maps via a series of nonlinear classifiers, the new model consistently predicts eye movements better than any of its competitors.
Additional Information
© 2012 ARVO. Received May 25, 2011; Accepted April 8, 2012. The authors would like to thank Jonathan Harel for helpful discussions. This research was supported by the NeoVision program at DARPA, by the ONR, by the G. Harold & Leila Y. Mathers Charitable Foundation, and by the WCU (World Class University) program funded by the Ministry of Education, Science and Technology through the National Research Foundation of Korea (R31-10008).
Attached Files
Published - Zhao2012p19095J_Vis.pdf
Files
Name | Size
---|---
Zhao2012p19095J_Vis.pdf (md5:4e19f28458950eb9bf51d090d463fa50) | 1.1 MB
Additional details
- Eprint ID
- 33120
- Resolver ID
- CaltechAUTHORS:20120813-104327256
- Funders
- Defense Advanced Research Projects Agency (DARPA)
- Office of Naval Research (ONR)
- G. Harold and Leila Y. Mathers Charitable Foundation
- National Research Foundation of Korea
- R31-10008
- Created
- 2012-08-13 (from EPrint's datestamp field)
- Updated
- 2023-05-01 (from EPrint's last_modified field)
- Caltech groups
- Koch Laboratory (KLAB)