Published December 2020 | public
Book Section - Chapter

Iterative Amortized Policy Optimization

Abstract

Policy networks are a central feature of deep reinforcement learning (RL) algorithms for continuous control, enabling the estimation and sampling of high-value actions. From the variational inference perspective on RL, policy networks, when used with entropy or KL regularization, are a form of amortized optimization, optimizing network parameters rather than the policy distributions directly. However, direct amortized mappings can yield suboptimal policy estimates and restricted distributions, limiting performance and exploration. Given this perspective, we consider the more flexible class of iterative amortized optimizers. We demonstrate that the resulting technique, iterative amortized policy optimization, yields performance improvements over direct amortization on benchmark continuous control tasks.
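To make the abstract's distinction concrete, below is a minimal PyTorch sketch (not the authors' implementation) contrasting the two forms of amortization: direct amortization maps a state to Gaussian policy parameters in a single feedforward pass, while iterative amortization repeatedly refines those parameters using gradients of an estimated objective. The soft_q_value placeholder, network sizes, step count, and additive update rule are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def soft_q_value(state, action):
    # Placeholder objective: stands in for a learned soft Q-function
    # estimate Q(s, a); any differentiable estimate would work here.
    return -((action - state.mean(dim=-1, keepdim=True)) ** 2).sum(dim=-1)

class DirectPolicy(nn.Module):
    """Direct amortization: one pass from state to (mu, log_sigma)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * action_dim))

    def forward(self, state):
        mu, log_sigma = self.net(state).chunk(2, dim=-1)
        return mu, log_sigma

class IterativePolicy(nn.Module):
    """Iterative amortization: an optimizer network refines the policy
    parameters over several steps from the objective's gradients,
    rather than predicting them in a single pass."""
    def __init__(self, action_dim, hidden=64):
        super().__init__()
        self.action_dim = action_dim
        # Input: current (mu, log_sigma) and their gradients.
        self.update_net = nn.Sequential(nn.Linear(4 * action_dim, hidden), nn.Tanh(),
                                        nn.Linear(hidden, 2 * action_dim))

    def forward(self, state, n_steps=5):
        mu = torch.zeros(state.shape[0], self.action_dim)
        log_sigma = torch.zeros_like(mu)
        for _ in range(n_steps):
            mu = mu.detach().requires_grad_()
            log_sigma = log_sigma.detach().requires_grad_()
            # Reparameterized sample and entropy-regularized objective.
            action = mu + log_sigma.exp() * torch.randn_like(mu)
            obj = soft_q_value(state, action).sum() + log_sigma.sum()
            # Gradients are fed to the update network as fixed inputs
            # (create_graph is left off for simplicity).
            grad_mu, grad_ls = torch.autograd.grad(obj, (mu, log_sigma))
            # The optimizer network proposes an additive update.
            delta = self.update_net(
                torch.cat([mu, log_sigma, grad_mu, grad_ls], dim=-1))
            d_mu, d_ls = delta.chunk(2, dim=-1)
            mu, log_sigma = mu + d_mu, log_sigma + d_ls
        return mu, log_sigma

# Example usage with hypothetical dimensions:
# policy = IterativePolicy(action_dim=2)
# mu, log_sigma = policy(torch.randn(8, 3))
```

Under this sketch, the iterative variant can reduce the amortization gap of a single direct mapping by spending additional refinement steps per state, at the cost of extra objective evaluations.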

Additional Information

JM acknowledges Scott Fujimoto for helpful discussions. This work was funded in part by NSF #1918839 and Beyond Limits. JM is currently employed by Google DeepMind. The authors declare no other competing interests related to this work.

Additional details

Created: August 20, 2023
Modified: March 27, 2024