Teaching categories to human learners with visual explanations
Abstract
We study the problem of computer-assisted teaching with explanations. Conventional approaches to machine teaching typically provide feedback only at the instance level, e.g., the category or label of the instance. However, it is intuitive that clear explanations from a knowledgeable teacher can significantly improve a student's ability to learn a new concept. To address these limitations, we propose a teaching framework that provides interpretable explanations as feedback and models how the learner incorporates this additional information. In the case of images, we show that we can automatically generate explanations that highlight the parts of the image that are responsible for the class label. Experiments with human learners show that, on average, participants achieve better test-set performance on challenging categorization tasks when taught with our interpretable approach than with existing methods.
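The paper describes its own explanation-generation and learner-modeling procedure; purely as an illustration of the general idea of highlighting the image regions most responsible for a class label, the sketch below computes an occlusion-based saliency map against a generic classifier. The `predict_proba` callable, its signature, and all parameter values are assumptions for this example, not the authors' method.

```python
import numpy as np

def occlusion_saliency(image, predict_proba, target_class, patch=16, stride=8):
    """Occlusion-based saliency sketch: regions whose masking most reduces the
    target-class probability are treated as most responsible for the label.

    image:         H x W x C float array
    predict_proba: assumed callable mapping an image to class probabilities
    target_class:  index of the class whose evidence we want to highlight
    """
    h, w = image.shape[:2]
    base = predict_proba(image)[target_class]
    heat = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)

    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            # Gray out one patch and measure the drop in target-class confidence.
            occluded[y:y + patch, x:x + patch] = image.mean()
            drop = base - predict_proba(occluded)[target_class]
            heat[y:y + patch, x:x + patch] += max(drop, 0.0)
            counts[y:y + patch, x:x + patch] += 1.0

    heat /= np.maximum(counts, 1.0)
    if heat.max() > 0:
        heat /= heat.max()  # normalize to [0, 1] so it can be overlaid on the image
    return heat
```

The resulting heatmap could be overlaid on the training image shown to a human learner, analogous in spirit to the visual explanations the paper evaluates.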
Additional Information
© 2018 IEEE. We would like to thank Google for their gift to the Visipedia project, AWS Research Credits, Bloomberg, Northrop Grumman, and the Swiss NSF for their Early Mobility Postdoctoral Fellowship. Thanks also to Kareem Moussa for providing the OCT dataset and to Kun ho Kim for helping generate crowd embeddings.
Files
Name | Size | MD5
---|---|---
Accepted Version - 1802.06924.pdf | 7.4 MB | bcfcf113c9eabf86835a40bcdc4c6e3a
Additional details
- Eprint ID: 87078
- DOI: 10.1109/CVPR.2018.00402
- Resolver ID: CaltechAUTHORS:20180613-143607692
- Funders: Amazon Web Services, Bloomberg Data Science, Northrop Grumman, Swiss National Science Foundation (SNSF)
- Created: 2018-06-13 (from EPrint's datestamp field)
- Updated: 2021-11-15 (from EPrint's last_modified field)