Published July 2020 | Submitted
Book Section - Chapter | Open Access

Risk-Averse Planning Under Uncertainty

Abstract

We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives. Synthesizing risk-averse optimal policies for POMDPs requires infinite memory and is therefore undecidable. To overcome this difficulty, we propose a method based on bounded policy iteration for designing stochastic but finite-state (memory) controllers, which takes advantage of standard convex optimization methods. Given a memory budget and an optimality criterion, the proposed method modifies the stochastic finite-state controller, leading to suboptimal solutions with lower coherent risk.
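
To make the notion of a coherent risk measure concrete, below is a minimal Python sketch, not the paper's algorithm, that computes the conditional value-at-risk (CVaR), a standard coherent risk measure, of a discrete cost distribution. The function name cvar and the level alpha are illustrative choices, not identifiers from the paper.

def cvar(costs, probs, alpha):
    """Conditional value-at-risk of a discrete cost distribution.

    CVaR_alpha is the expected cost over the worst alpha-fraction of
    outcomes; it is a standard example of a coherent risk measure.
    """
    assert abs(sum(probs) - 1.0) < 1e-9 and 0.0 < alpha <= 1.0
    # Visit outcomes from worst (highest cost) to best.
    order = sorted(range(len(costs)), key=lambda i: -costs[i])
    remaining, total = alpha, 0.0
    for i in order:
        take = min(probs[i], remaining)  # probability mass drawn from this outcome
        total += take * costs[i]
        remaining -= take
        if remaining <= 0.0:
            break
    return total / alpha

# Example: a heavy-tailed cost distribution.
costs = [0.0, 1.0, 10.0]
probs = [0.7, 0.25, 0.05]
print(cvar(costs, probs, 1.0))  # alpha = 1 recovers the expectation: 0.75
print(cvar(costs, probs, 0.1))  # worst 10% of outcomes: (0.05*10 + 0.05*1)/0.1 = 5.5

The dynamic (time-consistent) risk objectives considered in the paper compose one-step risk measures such as this over the planning horizon; the sketch covers only the static, one-step case.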

Additional Information

© 2020 AACC.

Attached Files

Submitted - 1909.12499.pdf (1.8 MB; md5:40b4b0150a782c9a3f66677dff933cc4)

Additional details

Created: August 19, 2023
Modified: December 22, 2023