Presence-Only Geographical Priors for Fine-Grained Image Classification
Creators
- Mac Aodha, Oisin
- Cole, Elijah
- Perona, Pietro
Abstract
Appearance information alone is often not sufficient to accurately differentiate between fine-grained visual categories. Human experts make use of additional cues, such as where and when a given image was taken, to inform their final decision. This contextual information is readily available in many online image collections but has been underutilized by existing image classifiers that focus solely on making predictions from the image contents. We propose an efficient spatio-temporal prior that, when conditioned on a geographical location and time, estimates the probability that a given object category occurs at that location. Our prior is trained from presence-only observation data and jointly models object categories, their spatio-temporal distributions, and photographer biases. Experiments on multiple challenging image classification datasets show that combining our prior with the predictions of image classifiers yields a large improvement in final classification performance.
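To make the combination step described in the abstract concrete, the sketch below (PyTorch) shows one plausible way a presence prior conditioned on location and time could be multiplied into an image classifier's per-category probabilities. The feature encoding, the `GeoPrior` network, and names such as `encode_location_time` and `combine` are illustrative assumptions, not the authors' implementation, which additionally models photographer bias in the presence-only data.

```python
# Illustrative sketch: a spatio-temporal presence prior combined with an
# image classifier's predictions. Architecture and names are assumptions,
# not the paper's exact model.
import math
import torch
import torch.nn as nn


def encode_location_time(lon, lat, t):
    """Wrap longitude, latitude, and time of year onto circles so nearby
    values (e.g. Dec 31 and Jan 1) stay close in feature space.
    Inputs are tensors scaled to [-1, 1]; output has 6 features per example."""
    feats = [torch.sin(math.pi * lon), torch.cos(math.pi * lon),
             torch.sin(math.pi * lat), torch.cos(math.pi * lat),
             torch.sin(math.pi * t),   torch.cos(math.pi * t)]
    return torch.stack(feats, dim=-1)


class GeoPrior(nn.Module):
    """Small fully connected network that outputs, for each category, the
    probability that it is present at the queried location and time
    (independent sigmoids: one category's presence does not exclude another's)."""

    def __init__(self, num_categories, num_feats=6, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_categories),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))


def combine(classifier_probs, prior_probs, eps=1e-9):
    """Multiply the classifier's per-category probabilities by the prior's
    presence probabilities and renormalize over categories."""
    scores = classifier_probs * prior_probs
    return scores / (scores.sum(dim=-1, keepdim=True) + eps)


if __name__ == "__main__":
    num_categories = 10
    prior = GeoPrior(num_categories)
    x = encode_location_time(torch.tensor([0.3]), torch.tensor([-0.5]),
                             torch.tensor([0.1]))
    classifier_probs = torch.softmax(torch.randn(1, num_categories), dim=-1)
    combined = combine(classifier_probs, prior(x))
    print(combined.argmax(dim=-1))
```

Multiplying by the prior and renormalizing treats the classifier and the prior as independent sources of evidence, so categories that are unlikely to occur at the photo's location and season are down-weighted at prediction time.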
Additional Information
© 2019 IEEE. This work was supported by a Google Focused Research Award and an NSF Graduate Research Fellowship (Grant No. DGE-1745301). We thank Grant Van Horn and Serge Belongie for helpful discussions, along with NVIDIA and AWS for their kind donations.
Attached Files
Submitted - 1906.05272.pdf
Files
Name | Size
---|---
1906.05272.pdf (md5:2aec438184cc6b5284f9829ce3e06937) | 2.9 MB
Additional details
- Eprint ID: 101735
- DOI: 10.1109/iccv.2019.00969
- Resolver ID: CaltechAUTHORS:20200306-091556702
- NSF Graduate Research Fellowship: DGE-1745301
- NVIDIA
- Amazon Web Services
- Created: 2020-03-06 (from EPrint's datestamp field)
- Updated: 2021-11-16 (from EPrint's last_modified field)