Published August 31, 2018
Journal Article
Temporal logic control of general Markov decision processes by approximate policy refinement
Abstract
The formal verification and controller synthesis for general Markov decision processes (gMDPs) that evolve over uncountable state spaces are computationally hard and thus generally rely on the use of approximate abstractions. In this paper, we contribute to the state of the art of control synthesis for temporal logic properties by computing and quantifying a less conservative gridding of the continuous state space of linear stochastic dynamic systems and by giving a new approach for control synthesis and verification that is robust to the incurred approximation errors. The approximation errors are expressed as both deviations in the outputs of the gMDPs and in the probabilistic transitions.
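The gridding idea in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm, only a hypothetical example under simple assumptions: a scalar linear stochastic system x' = a·x + w with Gaussian noise w ~ N(0, σ²), abstracted into a finite Markov chain by uniformly gridding a bounded interval of the state space. The function name `grid_abstraction` and all parameter values are illustrative.

```python
# Hypothetical sketch: uniform gridding of a 1-D linear stochastic
# system x' = a*x + w, w ~ N(0, sigma^2), into a finite Markov chain.
import math

def normal_cdf(x, mu, sigma):
    """Gaussian CDF evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def grid_abstraction(a=0.8, sigma=0.2, lo=-1.0, hi=1.0, n_cells=10):
    """Return cell centers and an n_cells x n_cells transition matrix.

    Each cell is represented by its center point; the probability of
    moving from cell i to cell j is the Gaussian mass that a*center_i + w
    places on cell j's interval. Mass falling outside [lo, hi] is
    dropped, which is one source of the approximation error that a
    robust synthesis procedure must account for.
    """
    width = (hi - lo) / n_cells
    centers = [lo + (i + 0.5) * width for i in range(n_cells)]
    P = []
    for c in centers:
        mu = a * c  # mean of the successor distribution from this cell
        row = [normal_cdf(lo + (j + 1) * width, mu, sigma)
               - normal_cdf(lo + j * width, mu, sigma)
               for j in range(n_cells)]
        P.append(row)
    return centers, P

centers, P = grid_abstraction()
# Each row sums to at most 1; the deficit is the probability mass
# leaving the gridded region.
```

Refining the grid (larger `n_cells`) shrinks the deviation between each continuous state and its representative cell center, at the cost of a larger finite abstraction; quantifying that trade-off less conservatively is the kind of contribution the abstract describes.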
Additional Information
© 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. Available online 31 August 2018.
Files
Published - 1-s2.0-S2405896318311261-main.pdf (576.0 kB, md5:af39c92c1a50480c614fe860a8c92a40)
Additional details
- Eprint ID
- 89584
- Resolver ID
- CaltechAUTHORS:20180912-141109196
- Created
- 2018-09-12 (from EPrint's datestamp field)
- Updated
- 2021-11-16 (from EPrint's last_modified field)