Robust Control of Uncertain Markov Decision Processes with Temporal Logic Specifications
- Creators
- Wolff, Eric M.
- Topcu, Ufuk
- Murray, Richard M.
Abstract
We present a method for designing robust controllers for dynamical systems with linear temporal logic specifications. We abstract the original system by a finite Markov Decision Process (MDP) that has transition probabilities in a specified uncertainty set. A robust control policy for the MDP is generated that maximizes the worst-case probability of satisfying the specification over all transition probabilities in the uncertainty set. To do this, we use a procedure from probabilistic model checking to combine the system model with an automaton representing the specification. This new MDP is then transformed into an equivalent form that satisfies assumptions for stochastic shortest path dynamic programming. A robust version of dynamic programming allows us to solve for an $\epsilon$-suboptimal robust control policy with time complexity $O(\log 1/\epsilon)$ times that of the non-robust case. We then implement this control policy on the original dynamical system.
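The computational core of the approach described above is a robust Bellman backup: the controller maximizes over actions while an adversary minimizes the expected value over all transition distributions in the uncertainty set. The sketch below is a minimal illustration of that idea, not the authors' implementation; it assumes interval uncertainty sets on each transition probability and a reachability objective on the product MDP, and all names (`worst_case_expectation`, `robust_value_iteration`, the `succ`/`p_lo`/`p_hi` data layout) are hypothetical.

```python
import numpy as np

def worst_case_expectation(values, p_lo, p_hi):
    """Minimize sum_i p_i * values[i] subject to p_lo <= p <= p_hi, sum(p) = 1.
    Greedy solution for interval sets: start every probability at its lower
    bound, then spend the remaining mass on the worst (smallest-value)
    successors first."""
    p = np.array(p_lo, dtype=float)
    budget = 1.0 - p.sum()
    for i in np.argsort(values):            # smallest values get mass first
        add = min(p_hi[i] - p[i], budget)
        p[i] += add
        budget -= add
        if budget <= 1e-12:
            break
    return float(np.dot(p, values))

def robust_value_iteration(states, actions, succ, p_lo, p_hi, target, eps=1e-6):
    """Worst-case probability of reaching `target` in an interval MDP.
    succ[s][a]          : list of successor-state indices of (s, a)
    p_lo[s][a], p_hi[s][a]: matching lower/upper transition-probability bounds.
    Iterates the robust Bellman operator until the update is below eps."""
    V = np.zeros(len(states))
    V[list(target)] = 1.0
    while True:
        V_new = V.copy()
        for s in states:
            if s in target:
                continue
            V_new[s] = max(
                worst_case_expectation(V[succ[s][a]], p_lo[s][a], p_hi[s][a])
                for a in actions[s]
            )
        if np.max(np.abs(V_new - V)) < eps:
            return V_new
        V = V_new
```

The inner minimization is what distinguishes the robust backup from ordinary value iteration; for richer uncertainty sets than intervals it would be replaced by the corresponding inner optimization, which is where the extra $O(\log 1/\epsilon)$ factor in the reported complexity arises.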
Additional Information
The authors would like to thank Scott Livingston for helpful comments. This work was supported by an NSF Graduate Research Fellowship and the Boeing Corporation.
Attached Files
Submitted - wolff_tech_final.pdf
Files
Name | Size
---|---
wolff_tech_final.pdf (md5:1266c586fad85fda438ac68468755106) | 346.3 kB
Additional details
- Eprint ID
- 28147
- Resolver ID
- CaltechCDSTR:2011.008
- Funders
- NSF Graduate Research Fellowship
- Boeing Corporation
- Created
- 2011-09-26 (created from EPrint's datestamp field)
- Updated
- 2019-10-03 (created from EPrint's last_modified field)
- Caltech groups
- Control and Dynamical Systems Technical Reports