Published December 2017
Conference Paper

Interpretable Machine Teaching via Feature Feedback

Abstract

A student's ability to learn a new concept can be greatly improved by clear, easy-to-understand explanations from a knowledgeable teacher. However, many existing machine-teaching approaches give the student only limited feedback. For example, when learning visual categories, this feedback may consist solely of the class label of the object in the image. Instead, we propose a teaching framework that provides human learners with both instance-level labels and explanations in the form of feature-level feedback. For image categorization, our feature-level feedback consists of a highlighted part or region of the image that explains the class label. In experiments with real human participants, we show that learners taught with feature-level feedback perform better at test time than those taught with existing methods.
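The teaching loop described above can be sketched in a few lines. The names and the specific update rules below are illustrative assumptions, not the paper's actual algorithm: a toy teacher repeatedly picks the example the learner currently gets most wrong, highlights its most salient feature as "feature-level feedback", and the learner gives extra weight to the highlighted feature during its update.

```python
# Toy sketch of teaching with feature-level feedback (hypothetical names and
# update rules; the paper's actual teaching algorithm is not reproduced here).
# Labels are +1 / -1; each example is a small feature vector.

def select_teaching_example(examples, learner_weights):
    """Pick the example with the smallest (most violated) margin under the learner."""
    def margin(ex):
        score = sum(w * x for w, x in zip(learner_weights, ex["features"]))
        return score * ex["label"]
    return min(examples, key=margin)

def feature_feedback(example):
    """Feature-level feedback: highlight the most salient feature (largest magnitude)."""
    feats = example["features"]
    return max(range(len(feats)), key=lambda i: abs(feats[i]))

def learner_update(weights, example, highlight, lr=0.5):
    """Perceptron-style update, with the highlighted feature weighted more strongly."""
    return [
        w + lr * example["label"] * x * (2.0 if i == highlight else 1.0)
        for i, (w, x) in enumerate(zip(weights, example["features"]))
    ]

examples = [
    {"features": [1.0, 0.0], "label": +1},
    {"features": [0.0, 1.0], "label": -1},
]
weights = [0.0, 0.0]
for _ in range(3):  # a short teaching session
    ex = select_teaching_example(examples, weights)
    hl = feature_feedback(ex)
    weights = learner_update(weights, ex, hl)
```

After this short session the learner's weights separate the two toy classes; the doubled learning rate on the highlighted feature is a stand-in for the faster concept acquisition the paper reports for learners given feature-level feedback.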

Additional Information

The authors thank Google for supporting the Visipedia project, and gratefully acknowledge kind donations from Northrop Grumman, Bloomberg, and AWS Research Credits. Yuxin Chen was supported in part by a Swiss NSF Mobility Postdoctoral Fellowship.

Attached Files

Published - nips17-teaching_paper-5.pdf (1.4 MB, md5:f3872553c7fe238ba2cf732574c6a322)

Additional details

Created: August 19, 2023
Modified: October 18, 2023