Trust Region Policy Optimization for POMDPs
Abstract
We propose Generalized Trust Region Policy Optimization (GTRPO), a policy gradient Reinforcement Learning (RL) algorithm for both Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs). Policy gradient is a class of model-free RL methods. Previous policy gradient methods are guaranteed to converge only when the underlying model is an MDP and the policy is run for an infinite horizon. We relax these assumptions to episodic settings and to partially observable models with memory-less policies. For the latter class, GTRPO uses a variant of the Q-function that depends on only three consecutive observations for each policy update, and is therefore computationally efficient. We show theoretically that the policy updates in GTRPO monotonically improve the expected cumulative return, and hence that GTRPO has convergence guarantees.
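The record carries only the abstract, but as a rough illustration of the trust-region idea it describes (and not the paper's GTRPO algorithm itself), the sketch below performs one generic surrogate-objective policy update for a memoryless, observation-conditioned tabular softmax policy, backtracking the step size until a mean-KL trust-region constraint is satisfied. All names, sizes, and the synthetic batch are illustrative assumptions.

```python
# Minimal sketch of a trust-region-style policy update for a memoryless
# (observation-conditioned) tabular softmax policy.  This is NOT the paper's
# GTRPO algorithm; it only illustrates the generic surrogate-objective /
# KL-trust-region idea.  All sizes and the synthetic batch are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_act = 5, 3          # assumed sizes of observation / action spaces
delta = 0.01                 # trust-region radius on the mean KL divergence

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_kl(logits_old, logits_new, obs):
    """Mean KL(pi_old || pi_new) over the observations in the batch."""
    p_old = softmax(logits_old)[obs]
    p_new = softmax(logits_new)[obs]
    return np.mean(np.sum(p_old * (np.log(p_old) - np.log(p_new)), axis=-1))

def surrogate(logits_new, logits_old, obs, act, adv):
    """Importance-weighted surrogate: mean of pi_new(a|o)/pi_old(a|o) * A."""
    ratio = softmax(logits_new)[obs, act] / softmax(logits_old)[obs, act]
    return np.mean(ratio * adv)

def trust_region_step(logits, obs, act, adv, lr=1.0, backtracks=10):
    """One update: gradient ascent on the surrogate, backtracking the step
    size until the mean-KL trust-region constraint holds and the surrogate
    actually improves; otherwise the step is rejected."""
    probs = softmax(logits)
    grad = np.zeros_like(logits)
    for o, a, A in zip(obs, act, adv):
        g = -probs[o] * A            # effect through the softmax normalizer
        g[a] += A                    # effect through the chosen action's logit
        grad[o] += g / len(obs)
    base = surrogate(logits, logits, obs, act, adv)
    for i in range(backtracks):
        step = lr * (0.5 ** i)
        cand = logits + step * grad
        if (mean_kl(logits, cand, obs) <= delta
                and surrogate(cand, logits, obs, act, adv) > base):
            return cand
    return logits                    # no acceptable candidate found

# Synthetic batch of (observation, action, advantage) samples for illustration.
obs = rng.integers(0, n_obs, size=256)
act = rng.integers(0, n_act, size=256)
adv = rng.normal(size=256)
logits = trust_region_step(np.zeros((n_obs, n_act)), obs, act, adv)
print("mean KL after update:", mean_kl(np.zeros((n_obs, n_act)), logits, obs))
```

The backtracking line search stands in for the natural-gradient step used by standard trust-region methods; it is only meant to show how the KL constraint gates each policy update.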
Additional Information
K. Azizzadenesheli is supported in part by NSF Career Award CCF-1254106 and Air Force FA9550-15-1-0221. A. Anandkumar is supported in part by Microsoft Faculty Fellowship, Google Faculty Research Award, Adobe Grant, NSF Career Award CCF-1254106, and AFOSR YIP FA9550-15-1-0221.
Attached Files
Submitted - 1810.07900.pdf
Files
Name | Size
---|---
1810.07900.pdf (md5:dd1ce9341e3e28897c0fb7bc5ead724a) | 6.9 MB
Additional details
- Eprint ID
- 94179
- Resolver ID
- CaltechAUTHORS:20190327-085807408
- NSF CCF-1254106
- Air Force Office of Scientific Research (AFOSR) FA9550-15-1-0221
- Microsoft Faculty Fellowship
- Google Faculty Research Award
- Adobe
- Created
- 2019-03-28 (created from EPrint's datestamp field)
- Updated
- 2023-06-02 (created from EPrint's last_modified field)