Published December 2017 | public
Book Section - Chapter

Gradient-based inverse risk-sensitive reinforcement learning

Abstract

We address the problem of inverse reinforcement learning in Markov decision processes where the agent is risk-sensitive. In particular, we model risk-sensitivity in a reinforcement learning framework by making use of models of human decision-making having their origins in behavioral psychology and economics. We propose a gradient-based inverse reinforcement learning algorithm that minimizes a loss function defined on the observed behavior. We demonstrate the performance of the proposed technique on two examples, the first of which is the canonical Grid World example and the second of which is an MDP modeling passengers' decisions regarding ride-sharing. In the latter, we use pricing and travel time data from a ride-sharing company to construct the transition probabilities and rewards of the MDP.

Additional Information

© 2017 IEEE. This work is supported by NSF CRII Award CNS-1656873, NSF US-Ignite Award CNS-1646912, and NSF FORCES (Foundations Of Resilient CybEr-physical Systems) Awards CNS-1238959, CNS-1238962, CNS-1239054, and CNS-1239166.

Additional details

Created:
August 19, 2023
Modified:
October 23, 2023