Published August 31, 2018
Journal Article
Open Access
Temporal Logic Control of POMDPs via Label-based Stochastic Simulation Relations
Abstract
The synthesis of controllers guaranteeing linear temporal logic specifications on partially observable Markov decision processes (POMDPs) via their belief models raises computational issues due to the continuous belief space. In this work, we construct a finite-state abstraction on which a control policy is synthesized and then refined back to the original belief model. We introduce a new notion of label-based approximate stochastic simulation to quantify the deviation between belief models. We develop a robust synthesis methodology that yields a lower bound on the satisfaction probability by compensating for these deviations a priori, and that utilizes a less conservative control refinement.
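As a rough illustration of the objects the abstract refers to, the sketch below implements the Bayes belief update that defines the continuous belief model of a POMDP, together with a naive nearest-neighbour gridding of the belief simplex as a stand-in finite-state abstraction. The function names, the toy two-state model, and the gridding scheme are illustrative assumptions only; they do not reproduce the label-based stochastic simulation relation or the robust control refinement developed in the paper.

```python
# Minimal sketch (illustrative, not the paper's construction):
# belief update of a POMDP and a coarse finite abstraction of the belief simplex.
import numpy as np

def belief_update(belief, action, observation, T, O):
    """One Bayes-filter step: prior belief -> posterior belief.

    belief : (n,)      distribution over the n hidden states
    T      : (n, n, A) transition kernel, T[s_next, s, a]
    O      : (n, Z)    observation likelihoods, O[s_next, z]
    """
    predicted = T[:, :, action] @ belief        # predict next-state distribution
    weighted = O[:, observation] * predicted    # weight by observation likelihood
    return weighted / weighted.sum()            # normalize to the posterior belief

def abstract_state(belief, grid_points):
    """Map a belief to the index of the nearest grid point (toy finite abstraction)."""
    distances = np.linalg.norm(grid_points - belief, axis=1)
    return int(np.argmin(distances))

# Hypothetical 2-state POMDP with one action and two observations.
T = np.array([[[0.9], [0.2]],
              [[0.1], [0.8]]])                      # T[s_next, s, a]
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])                          # O[s_next, z]
grid = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # coarse belief grid

b = np.array([0.5, 0.5])
b_next = belief_update(b, action=0, observation=1, T=T, O=O)
print(b_next, abstract_state(b_next, grid))
```

In this sketch the abstraction error is simply the distance to the chosen grid point; the paper instead quantifies such deviations through the label-based approximate stochastic simulation relation and compensates for them a priori in the synthesis.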
Additional Information
© 2018, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. Available online 31 August 2018. This research was carried out at JPL and Caltech under a contract with NASA and funded through the President's and Director's Fund Program.
Attached Files
Published - 1-s2.0-S2405896318311625-main.pdf
Files
| Name | Size |
|---|---|
| 1-s2.0-S2405896318311625-main.pdf (md5:d77533357b91bab7c927ae34bfbd6b02) | 493.2 kB |
Additional details
- Eprint ID
- 89577
- Resolver ID
- CaltechAUTHORS:20180912-130453647
- Funders
- NASA/JPL/Caltech; JPL President and Director's Fund
- Created
- 2018-09-12 (from EPrint's datestamp field)
- Updated
- 2022-05-17 (from EPrint's last_modified field)