Compactly Restrictable Metric Policy Optimization Problems
Abstract
We study policy optimization problems for deterministic Markov decision processes (MDPs) with metric state and action spaces, which we refer to as Metric Policy Optimization Problems (MPOPs). Our goal is to establish theoretical results on the well-posedness of MPOPs that can characterize practically relevant continuous control systems. To do so, we define a special class of MPOPs called Compactly Restrictable MPOPs (CR-MPOPs), which are flexible enough to capture the complex behavior of robotic systems but specific enough to admit solutions using dynamic programming methods such as value iteration. We show how to arrive at CR-MPOPs using forward-invariance. We further show that our theoretical results on CR-MPOPs can be used to characterize feedback linearizable control affine systems.
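The abstract mentions value iteration only at a high level. As a rough illustration (not the paper's formulation), the sketch below runs value iteration for a deterministic MDP on a discretized state-action grid; the dynamics `f`, reward `r`, and discount `gamma` are placeholder choices made purely for illustration.

```python
# Hypothetical sketch: value iteration for a deterministic MDP on a
# discretized state-action grid. The dynamics f, reward r, and discount
# gamma are illustrative placeholders, not taken from the paper.
import numpy as np

def value_iteration(states, actions, f, r, gamma=0.95, tol=1e-6, max_iters=1000):
    """Approximate the optimal value function V on a finite state grid.

    states, actions: 1D arrays of discretized states/actions
    f: deterministic dynamics, f(s, a) -> next state
    r: reward function, r(s, a) -> float
    """
    V = np.zeros(len(states))

    def nearest(s):
        # Project a successor state back onto the grid (nearest neighbor).
        return int(np.argmin(np.abs(states - s)))

    for _ in range(max_iters):
        V_new = np.empty_like(V)
        for i, s in enumerate(states):
            # Bellman optimality backup: maximize over the action grid.
            V_new[i] = max(r(s, a) + gamma * V[nearest(f(s, a))] for a in actions)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V

# Toy usage (illustrative only): a scalar system whose clipped dynamics
# keep the state in the compact interval [-1, 1].
states = np.linspace(-1.0, 1.0, 101)
actions = np.linspace(-0.5, 0.5, 11)
f = lambda s, a: np.clip(s + 0.1 * a, -1.0, 1.0)
r = lambda s, a: -(s**2) - 0.1 * a**2
V = value_iteration(states, actions, f, r)
```

In this toy example the clipped dynamics keep every trajectory inside a compact interval, loosely mirroring the role forward-invariance plays in restricting the optimization to a compact subset of the state space.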
Additional Information
Submitted May 15th, 2021. Resubmitted July 6th, 2022. This work was supported in part by DARPA and Beyond Limits. Victor D. Dorobantu was also supported in part by a Kortschak Fellowship.
Attached Files
Submitted - 2207.05850.pdf (2.7 MB)
Additional details
- Eprint ID: 115568
- Resolver ID: CaltechAUTHORS:20220714-212414777
- Funders: Defense Advanced Research Projects Agency (DARPA); Beyond Limits; Kortschak Scholars Program
- Created: 2022-07-15 (from EPrint's datestamp field)
- Updated: 2023-07-10 (from EPrint's last_modified field)
- Caltech groups: Center for Autonomous Systems and Technologies (CAST)