Published December 2019 | Submitted
Book Section - Chapter | Open Access

Safe Policy Synthesis in Multi-Agent POMDPs via Discrete-Time Barrier Functions

Abstract

A multi-agent partially observable Markov decision process (MPOMDP) is a modeling paradigm used for high-level planning of heterogeneous autonomous agents subject to uncertainty and partial observation. Despite their modeling efficiency, MPOMDPs have not received significant attention in safety-critical settings. In this paper, we use barrier functions to design policies for MPOMDPs that ensure safety. Notably, our method does not rely on discretization of the belief space or on finite memory. To this end, we formulate necessary and sufficient conditions for the safety of a given set based on discrete-time barrier functions (DTBFs), and we demonstrate that our formulation also allows Boolean compositions of DTBFs for representing more complicated safe sets. We show that the proposed method can be implemented online by a sequence of one-step greedy algorithms, either as a standalone safe controller or as a safety filter applied to a nominal planning policy. We illustrate the efficiency of the proposed DTBF-based method using a high-fidelity simulation of heterogeneous robots.
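
For context, a minimal sketch of the discrete-time barrier function idea over the belief space, stated in its standard form; the exact conditions, notation, and class-kappa function used in the paper may differ:

\[
\mathcal{S} = \{\, b : h(b) \ge 0 \,\}, \qquad
h(b_{t+1}) - h(b_t) \ge -\alpha\bigl(h(b_t)\bigr), \quad \alpha \ \text{class-}\kappa,\ \alpha(r) \le r,
\]

so that \( h(b_0) \ge 0 \) implies \( h(b_t) \ge 0 \) for all \( t \), i.e., the safe set \( \mathcal{S} \) is forward invariant under the belief update \( b_{t+1} = f(b_t, a_t, o_t) \). A one-step greedy safety filter then selects, at each step, an action whose resulting belief satisfies this inequality, deviating from the nominal policy only when necessary.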

Additional Information

© 2019 IEEE.

Attached Files

Submitted - 1903.07823.pdf (2.0 MB)
md5:00be63061d9fb113e298c893f876393a

Additional details

Created: August 19, 2023
Modified: October 20, 2023