Published April 11, 2019 | Submitted
Discussion Paper | Open
The iWildCam 2018 Challenge Dataset
Abstract
Camera traps are a valuable tool for studying biodiversity, but research using these data is limited by the speed of human annotation. With the vast amounts of data now available, it is imperative that we develop automatic solutions for annotating camera trap data so that this research can scale. A promising approach is to use deep networks trained on human-annotated images. We provide a challenge dataset to explore whether such solutions generalize to novel locations, since the most useful systems are those that can be trained once and then deployed to operate automatically in new locations.
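The evaluation the abstract describes hinges on splitting images by camera location rather than at random, so that test images come from cameras the model never saw during training. Below is a minimal sketch of such a location-disjoint split; the record fields ("location", "category_id", "image_id") are illustrative assumptions, not the dataset's exact schema.

```python
import random

def split_by_location(annotations, test_fraction=0.2, seed=0):
    """Split image records so train and test share no camera locations."""
    locations = sorted({rec["location"] for rec in annotations})
    random.Random(seed).shuffle(locations)
    n_test = max(1, int(len(locations) * test_fraction))
    test_locations = set(locations[:n_test])
    train = [rec for rec in annotations if rec["location"] not in test_locations]
    test = [rec for rec in annotations if rec["location"] in test_locations]
    return train, test

if __name__ == "__main__":
    # Hypothetical toy records standing in for camera trap image annotations.
    records = [
        {"image_id": i, "location": i % 5, "category_id": i % 2}
        for i in range(100)
    ]
    train, test = split_by_location(records)
    print(len(train), "train images,", len(test), "test images")
    # Sanity check: no camera location appears in both splits.
    assert not ({r["location"] for r in train} & {r["location"] for r in test})
```

A random per-image split would leak background and camera-specific cues into the test set; holding out whole locations is what makes the benchmark measure generalization to novel cameras.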
Additional Information
We would like to thank the USGS and NPS for providing us with data. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1745301. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Attached Files
Submitted - 1904.05986.pdf
Files
1904.05986.pdf (2.8 MB)
md5:31b2ac37c7987c048826540ce8813f2c
Additional details
- Eprint ID: 103464
- Resolver ID: CaltechAUTHORS:20200526-135628220
- NSF Graduate Research Fellowship: DGE-1745301
- Created: 2020-05-26 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)