Published April 11, 2019 | Submitted
Report | Open Access

A Function Approximation Method for Model-based High-Dimensional Inverse Reinforcement Learning

Abstract

This work addresses the inverse reinforcement learning problem in high-dimensional state spaces, which relies on efficiently solving model-based high-dimensional reinforcement learning problems. To avoid the computational expense of these reinforcement learning problems, we propose a function approximation method that ensures the Bellman Optimality Equation always holds, and then estimate a function based on the observed human actions for the inverse reinforcement learning problem. The time complexity of the proposed method is linear in the cardinality of the action set, so it can efficiently handle high-dimensional and even continuous state spaces. We test the proposed method in a simulated environment to show its accuracy, and in three clinical tasks to show how it can be used to evaluate a doctor's proficiency.
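The core idea can be illustrated with a minimal sketch. In this toy setup (all names, sizes, and the tabular parameterization are illustrative assumptions, not the paper's implementation), we parameterize the Q-function directly and define the implied reward so that the Bellman Optimality Equation holds by construction; the only per-state work is a max over actions, which is linear in the cardinality of the action set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP: discrete states/actions with known dynamics.
n_states, n_actions, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = next-state dist

# Parameterize Q directly, then define the implied reward so that the
# Bellman Optimality Equation Q(s,a) = r(s,a) + gamma * E[max_a' Q(s',a')]
# holds exactly, with no iterative planning step.
Q = rng.normal(size=(n_states, n_actions))
V = Q.max(axis=1)          # V(s) = max_a Q(s,a); O(|A|) per state
r = Q - gamma * (P @ V)    # implied reward under this Q

# Sanity check: the Bellman residual is zero for every (s, a).
residual = Q - (r + gamma * (P @ V))
assert np.allclose(residual, 0.0)

# Greedy policy from Q; in IRL this would be fit to observed expert actions.
policy = Q.argmax(axis=1)
```

In an IRL setting one would tune the parameters of Q so that the greedy policy matches the observed human actions, with the reward recovered as a by-product of the identity above.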

Additional Information

This work was supported by the National Institutes of Health, NIBIB.

Attached Files

Submitted - 1708.07738.pdf


Additional Details

Created: August 19, 2023
Modified: October 20, 2023