Published March 2019 | Accepted Version
Book Section - Chapter | Open Access

On the Utility of Model Learning in HRI

Abstract

Fundamental to robotics is the debate between model-based and model-free learning: should the robot build an explicit model of the world, or learn a policy directly? In the context of HRI, part of the world to be modeled is the human. One option is for the robot to treat the human as a black box and learn a policy for how they act directly. But it can also model the human as an agent, and rely on a "theory of mind" to guide or bias the learning (grey box). We contribute a characterization of the performance of these methods under the optimistic case of having an ideal theory of mind, as well as under different scenarios in which the assumptions behind the robot's theory of mind for the human are wrong, as they inevitably will be in practice. We find that there is a significant sample complexity advantage to theory of mind methods and that they are more robust to covariate shift, but that when enough interaction data is available, black box approaches eventually dominate.
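As a rough illustration of the distinction drawn in the abstract (not code from the paper), the sketch below contrasts a black-box estimate of how the human acts with a grey-box "theory of mind" fit, using the common noisy-rationality (Boltzmann) assumption. The toy single-state environment, the feature matrix action_features, the hidden preference vector true_theta, and the helper boltzmann_policy are all illustrative assumptions introduced here.

# Hedged sketch: black-box vs. theory-of-mind human modeling in a toy setting.
# Everything below (environment, features, "human" simulator) is assumed for
# illustration; it is not the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: one state, three actions, each action described by features.
# The hidden human reward is linear in the features, and the human is
# noisily (Boltzmann) rational with respect to it.
N_ACTIONS, N_FEATURES = 3, 4
action_features = rng.normal(size=(N_ACTIONS, N_FEATURES))
true_theta = np.array([1.0, -0.5, 0.3, 0.8])  # hidden human preferences

def boltzmann_policy(theta, beta=3.0):
    """Action distribution of a noisily rational human with linear reward theta."""
    q = beta * action_features @ theta
    p = np.exp(q - q.max())
    return p / p.sum()

# Simulated interaction data: 50 observed human action choices.
data = rng.choice(N_ACTIONS, size=50, p=boltzmann_policy(true_theta))

# Black box: treat the human as opaque and estimate P(action) directly
# from the empirical action frequencies.
black_box = np.bincount(data, minlength=N_ACTIONS) / len(data)

# Grey box ("theory of mind"): assume the human is Boltzmann-rational and
# fit the reward parameters theta by maximum likelihood. A crude random
# search keeps the sketch dependency-free.
def neg_log_lik(theta):
    p = boltzmann_policy(theta)
    return -np.log(p[data]).sum()

best_theta = min((rng.normal(size=N_FEATURES) for _ in range(2000)),
                 key=neg_log_lik)
grey_box = boltzmann_policy(best_theta)

print("empirical (black box):", np.round(black_box, 2))
print("theory of mind fit:   ", np.round(grey_box, 2))

The structural difference is the point: the black-box estimate only describes the actions it has already seen, whereas the fitted preference vector constrains predictions in states the robot has never observed, which is the intuition behind the sample-complexity and covariate-shift advantages the abstract reports for theory-of-mind methods.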

Additional Information

© 2019 IEEE. We thank the members of the InterACT Lab at UC Berkeley. In particular, we are grateful for Kush Bhatia's feedback on building human simulators and Eli Bronstein's assistance on the black-box model-based component of this work. This work is partially supported by NVIDIA and the Caltech Arjun Bansal and Ria Langheim Summer Undergraduate Research Fellowship.

Files

Accepted Version: 1901.01291.pdf (2.2 MB)
md5: 9a54099ebb3c2dea99c018036c9b3c1f

Additional details

Created: August 19, 2023
Modified: October 20, 2023