Safety-Aware Preference-Based Learning for Safety-Critical Control
Abstract
Bringing dynamic robots into the wild requires a tenuous balance between performance and safety. Yet controllers designed to provide robust safety guarantees often result in conservative behavior, and tuning these controllers to find the ideal trade-off between performance and safety typically requires domain expertise or a carefully constructed reward function. This work presents a design paradigm for systematically achieving behaviors that balance performance and robust safety by integrating safety-aware Preference-Based Learning (PBL) with Control Barrier Functions (CBFs). Fusing these concepts -- safety-aware learning and safety-critical control -- gives a robust means to achieve safe behaviors on complex robotic systems in practice. We demonstrate the capability of this design paradigm to achieve safe and performant perception-based autonomous operation of a quadrupedal robot both in simulation and experimentally on hardware.
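To make the role of the CBF in this design paradigm concrete, below is a minimal sketch of a CBF quadratic-program safety filter with a tunable class-K gain alpha, the kind of controller parameter that safety-aware PBL would select from pairwise preferences over rollouts. The helper name cbf_qp_filter, the single-integrator toy system, the barrier h(x) = 1 - x, and the candidate gains are illustrative assumptions, not the paper's quadruped implementation or its learning algorithm.

```python
import numpy as np

def cbf_qp_filter(u_des, Lfh, Lgh, h, alpha):
    """Closed-form single-constraint CBF-QP safety filter:
        u* = argmin_u ||u - u_des||^2  s.t.  Lfh + Lgh @ u + alpha * h >= 0,
    i.e. the Euclidean projection of the nominal input onto the safe halfspace."""
    u_des = np.atleast_1d(u_des).astype(float)
    a = np.atleast_1d(Lgh).astype(float)
    slack = Lfh + a @ u_des + alpha * h
    if slack >= 0.0:
        return u_des                      # nominal input already satisfies the CBF condition
    return u_des - (slack / (a @ a)) * a  # minimally modify u_des to restore safety

# Toy example (illustrative assumption): single integrator x_dot = u that must
# stay in {x <= 1}, encoded by the barrier h(x) = 1 - x, so Lfh = 0, Lgh = -1.
def rollout(alpha, x0=0.0, u_des=1.0, dt=0.01, T=3.0):
    x, traj = x0, []
    for _ in range(int(T / dt)):
        u = cbf_qp_filter(np.array([u_des]), Lfh=0.0, Lgh=np.array([-1.0]),
                          h=1.0 - x, alpha=alpha)
        x += dt * u[0]
        traj.append(x)
    return np.array(traj)

# Compare a conservative gain against a more aggressive one.
for alpha in (0.5, 5.0):
    traj = rollout(alpha)
    print(f"alpha={alpha}: final x = {traj[-1]:.3f}, max x = {traj.max():.3f}")
```

With the linear class-K function alpha * h, a larger alpha lets the nominal command act nearly unimpeded until the state is close to the boundary (more performant), while a smaller alpha intervenes earlier and keeps a wider margin (more conservative). Choosing among such rollouts from pairwise preferences, rather than hand-tuning a reward function, is the trade-off the abstract describes.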
Additional Information
© 2022 R.K. Cosner, M. Tucker, A.J. Taylor, K. Li, T.G. Molnar, W. Ubellacker, A. Alan, G. Orosz, Y. Yue & A.D. Ames. Attribution 4.0 International (CC BY 4.0).
Attached Files
Submitted - 2112.08516.pdf (5.9 MB)
Additional details
- Eprint ID: 113589
- Resolver ID: CaltechAUTHORS:20220224-200843937
- Created: 2022-02-28 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)