Limits of Probabilistic Safety Guarantees when Considering Human Uncertainty
Abstract
When autonomous robots interact with humans, such as during autonomous driving, explicit safety guarantees are crucial in order to avoid potentially life-threatening accidents. Many data-driven methods have explored learning probabilistic bounds over human agents' trajectories (i.e., confidence tubes that contain trajectories with probability 1 − δ), which can then be used to guarantee safety with probability 1 − δ. However, almost all existing works consider δ ≥ 0.001. The purpose of this paper is to argue that (1) in safety-critical applications, it is necessary to provide safety guarantees with δ < 10⁻⁸, and (2) current learning-based methods are ill-equipped to compute accurate confidence bounds at such low δ. Using human driving data (from the highD dataset), as well as synthetically generated data, we show that current uncertainty models use inaccurate distributional assumptions to describe human behavior and/or require infeasible amounts of data to accurately learn confidence bounds for δ ≤ 10⁻⁸. These two issues result in unreliable confidence bounds, which can have dangerous implications if deployed on safety-critical systems.
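As a rough, back-of-the-envelope illustration of the data-requirement claim (this sketch is not taken from the paper itself), consider the most optimistic, purely empirical approach: the classical "rule of three" for binomial tails says that observing zero violations in n i.i.d. trajectories only certifies a violation probability of about 3/n at 95% confidence, so certifying a level δ needs on the order of 3/δ violation-free samples. The snippet below computes this; the function name `samples_needed` and the 95% confidence level are illustrative choices, not quantities from the paper.

```python
import math

def samples_needed(delta: float, confidence: float = 0.95) -> float:
    """Smallest n such that zero observed violations in n i.i.d. trajectories
    certifies, at the given confidence level, a violation probability <= delta,
    i.e. the smallest n with (1 - delta)**n <= 1 - confidence."""
    return math.log(1.0 - confidence) / math.log(1.0 - delta)

for delta in (1e-3, 1e-8):
    n = samples_needed(delta)
    print(f"delta = {delta:g}: roughly {n:.1e} violation-free trajectories needed")
```

For δ = 10⁻³ this is a few thousand trajectories, which datasets such as highD can plausibly supply; for δ = 10⁻⁸ it is several hundred million, consistent with the paper's argument that current learning-based methods require infeasible amounts of data at such low δ.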
Additional Information
© 2021 IEEE.

Attached Files
- Submitted - 2103.03388.pdf (1.6 MB; md5: b391d594b798163e4df0d957f2851a28)
Additional details
- Eprint ID: 109070
- Resolver ID: CaltechAUTHORS:20210511-085543440
- Created: 2021-05-11 (from EPrint's datestamp field)
- Updated: 2021-12-14 (from EPrint's last_modified field)
- Caltech groups: Division of Biology and Biological Engineering (BBE)