Published April 11, 2019 | Submitted
Report | Open

Inverse Reinforcement Learning in Large State Spaces via Function Approximation

Abstract

This paper introduces a new method for inverse reinforcement learning in large-scale, high-dimensional state spaces. To avoid solving the computationally expensive reinforcement learning problems that arise in reward learning, we propose a function approximation method that ensures the Bellman optimality equation always holds, and then estimate a function that maximizes the likelihood of the observed motion. The time complexity of the proposed method is linear in the cardinality of the action set, so it handles large state spaces efficiently. We test the proposed method in a simulated environment and show that it is more accurate than existing methods and scales significantly better. We also show that the proposed method can extend many existing methods to high-dimensional state spaces. We then apply the method to evaluate the effect of rehabilitative stimulations on patients with spinal cord injuries, based on observed patient motions.
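To make the recipe in the abstract concrete, here is a minimal sketch in Python: parameterize Q directly (here with a linear function approximator over hypothetical state-action features), read the reward off the Bellman optimality equation so that it holds by construction, and fit the parameters by maximizing the likelihood of the demonstrated actions. The linear features, the Boltzmann (softmax) action model with temperature beta, and all function names are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# Sketch of IRL via function approximation of Q, under the assumptions
# named above (linear Q, Boltzmann action model). All names hypothetical.
import numpy as np

def q_values(theta, phi_sa):
    """Q(s, .) for all actions; phi_sa is an (|A|, d) feature matrix."""
    return phi_sa @ theta

def neg_log_likelihood(theta, demos, beta=1.0):
    """demos: list of (phi_sa, a) pairs, where a is the demonstrated
    action index. Each term costs O(|A|), so evaluation is linear in
    the cardinality of the action set."""
    nll = 0.0
    for phi_sa, a in demos:
        q = beta * q_values(theta, phi_sa)
        nll += np.logaddexp.reduce(q) - q[a]  # -log softmax(q)[a]
    return nll

def grad_nll(theta, demos, beta=1.0):
    """Exact gradient of the negative log-likelihood above."""
    g = np.zeros_like(theta)
    for phi_sa, a in demos:
        q = beta * q_values(theta, phi_sa)
        p = np.exp(q - np.logaddexp.reduce(q))  # Boltzmann policy probs
        g += beta * (phi_sa.T @ p - phi_sa[a])
    return g

def implied_reward(theta, phi_sa_row, phi_next, gamma=0.95):
    """Reward read off from the Bellman optimality equation,
    r(s, a) = Q(s, a) - gamma * max_a' Q(s', a'), so the equation
    holds by construction and no inner RL solve is ever needed."""
    return phi_sa_row @ theta - gamma * np.max(q_values(theta, phi_next))

# Usage: plain gradient descent on synthetic demonstrations.
rng = np.random.default_rng(0)
d, num_actions = 8, 4
demos = [(rng.normal(size=(num_actions, d)), rng.integers(num_actions))
         for _ in range(100)]
theta = np.zeros(d)
for _ in range(200):
    theta -= 0.01 * grad_nll(theta, demos)
```

The design point this illustrates: because the reward is defined through Q, every candidate parameter vector already satisfies the Bellman optimality equation, so reward learning never requires solving a reinforcement learning problem, and each likelihood-gradient step touches only |A| Q-values per demonstration.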

Additional Information

This work was supported by the National Institutes of Health (NIBIB).

Files

Submitted - 1707.09394.pdf (436.4 kB)
md5:fa675355aa2c0838c62fb9d5ac5ef65f

Additional details

Created: August 19, 2023
Modified: October 20, 2023