Published March 1, 2018 | Submitted + Published + Supplemental Material
Journal Article | Open

Hierarchical Imitation and Reinforcement Learning

Abstract

We study how to effectively leverage expert feedback to learn sequential decision-making policies. We focus on problems with sparse rewards and long time horizons, which typically pose significant challenges in reinforcement learning. We propose an algorithmic framework, called hierarchical guidance, that leverages the hierarchical structure of the underlying problem to integrate different modes of expert interaction. Our framework can incorporate different combinations of imitation learning (IL) and reinforcement learning (RL) at different levels, leading to dramatic reductions in both expert effort and cost of exploration. Using long-horizon benchmarks, including Montezuma's Revenge, we demonstrate that our approach can learn significantly faster than hierarchical RL, and be significantly more label-efficient than standard IL. We also theoretically analyze labeling cost for certain instantiations of our framework.
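The abstract describes the framework only at a high level. As an illustration, the following minimal Python sketch shows one possible shape of hierarchical guidance: a loose DAgger-style imitation loop chooses subgoals at the high level, while tabular Q-learning handles control at the low level. The chain environment, the subgoal landmarks, the scripted expert, and every hyperparameter below are illustrative assumptions, not the paper's algorithm or its benchmarks.

import random
from collections import defaultdict

# Toy sketch of "hierarchical guidance" (all specifics here are assumptions,
# not the paper's setup): an expert labels high-level subgoals, DAgger-style,
# while a tabular Q-learner is trained by RL to reach each subgoal.

CHAIN_LEN = 12          # chain states 0..11; the task is to reach state 11
SUBGOALS = [3, 7, 11]   # hypothetical subgoal landmarks along the chain

def expert_subgoal(state):
    """Stand-in expert: label the next landmark ahead of the agent."""
    for g in SUBGOALS:
        if g > state:
            return g
    return SUBGOALS[-1]

class LowLevelQ:
    """Tabular Q-learner, rewarded intrinsically for reaching its subgoal."""
    def __init__(self, eps=0.2, alpha=0.5, gamma=0.95):
        self.q = defaultdict(float)   # (state, goal, action) -> value
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state, goal):
        if random.random() < self.eps:    # epsilon-greedy exploration
            return random.choice([-1, +1])
        return max([-1, +1], key=lambda a: self.q[(state, goal, a)])

    def update(self, s, goal, a, r, s2):
        best_next = max(self.q[(s2, goal, b)] for b in [-1, +1])
        key = (s, goal, a)
        self.q[key] += self.alpha * (r + self.gamma * best_next - self.q[key])

def run_segment(low, state, goal, max_steps=20):
    """Roll out the low-level policy toward `goal`, learning as it goes."""
    for _ in range(max_steps):
        a = low.act(state, goal)
        s2 = min(max(state + a, 0), CHAIN_LEN - 1)
        reached = (s2 == goal)
        low.update(state, goal, a, 1.0 if reached else -0.01, s2)
        state = s2
        if reached:
            return state, True
    return state, False

low = LowLevelQ()
high_labels = {}   # aggregated dataset of expert labels: state -> subgoal

for episode in range(200):
    state = 0
    while state < CHAIN_LEN - 1:
        goal = high_labels.get(state)
        if goal is None:                 # query the expert only for new states
            goal = expert_subgoal(state)
            high_labels[state] = goal
        state, reached = run_segment(low, state, goal)
        if not reached:
            break   # crude stand-in for the paper's gating of failed segments

print("expert labels used:", len(high_labels))

In this toy run the expert is queried only at the few states where a subgoal decision is needed, while the low-level learner explores on its own, mirroring the label-efficiency point the abstract makes.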

Additional Information

© 2018 by the author(s). The majority of this work was done while HML was an intern at Microsoft Research. HML is also supported in part by an Amazon AI Fellowship.

Attached Files

Published - le18a.pdf

Submitted - 1803.00590.pdf

Supplemental Material - le18a-supp.pdf

Files

Files (2.6 MB)

Name             Size      Checksum
le18a.pdf        637.9 kB  md5:7177da126ebbb535dbd26385d6aafd36
1803.00590.pdf   1.3 MB    md5:ce52e1d8c44592976f2df46b7037676b
le18a-supp.pdf   674.9 kB  md5:00c0f896b71dc3eb0d52c1bcdbca942f

Additional details

Created: August 19, 2023
Modified: October 20, 2023