Learning-based Adaptive Control using Contraction Theory
Abstract
Adaptive control is subject to stability and performance issues when a learned model is used to enhance its performance. This paper thus presents a deep learning-based adaptive control framework for nonlinear systems with multiplicatively-separable parametrization, called the adaptive Neural Contraction Metric (aNCM). The aNCM uses a Deep Neural Network (DNN) to approximate the real-time optimization that computes a differential Lyapunov function and a corresponding stabilizing adaptive control law. The use of DNNs permits real-time implementation of the control law and broad applicability to a variety of nonlinear systems with parametric and nonparametric uncertainties. Using contraction theory, we show that the aNCM ensures exponential boundedness of the distance between the target and controlled trajectories in the presence of parametric model uncertainties, learning errors caused by the aNCM approximation, and external disturbances. Its superiority over existing robust and adaptive control methods is demonstrated using a cart-pole balancing model.
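As a rough illustration of the architecture described in the abstract, the sketch below shows how a DNN can output a state-dependent positive-definite metric M(x) and how a metric-based feedback term can be formed around it. This is a minimal sketch, assuming a PyTorch implementation; the network shape, the M(x) = L(x)L(x)ᵀ + εI construction, and the feedback form u = u_d − Bᵀ M(x)(x − x_d) are illustrative assumptions, not the paper's aNCM equations or its adaptation law.

```python
# Minimal sketch (illustrative only, not the authors' aNCM implementation).
import torch
import torch.nn as nn


class MetricNet(nn.Module):
    """DNN surrogate for a state-dependent contraction metric M(x) (n x n, positive definite)."""

    def __init__(self, n_state: int, hidden: int = 64, eps: float = 1e-2):
        super().__init__()
        self.n = n_state
        self.eps = eps  # lower bound keeping M(x) positive definite
        n_tril = n_state * (n_state + 1) // 2  # entries of a lower-triangular factor L(x)
        self.net = nn.Sequential(
            nn.Linear(n_state, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_tril),
        )
        self.tril_idx = torch.tril_indices(n_state, n_state)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Build M(x) = L(x) L(x)^T + eps * I, which is symmetric positive definite.
        L = torch.zeros(x.shape[0], self.n, self.n, device=x.device)
        L[:, self.tril_idx[0], self.tril_idx[1]] = self.net(x)
        eye = torch.eye(self.n, device=x.device)
        return L @ L.transpose(1, 2) + self.eps * eye


def metric_feedback(u_d, x, x_d, B, metric_net):
    """Illustrative metric-based feedback u = u_d - B^T M(x) (x - x_d) around a target trajectory."""
    M = metric_net(x.unsqueeze(0)).squeeze(0)
    return u_d - B.T @ M @ (x - x_d)
```

Per the abstract, the actual aNCM trains the DNN to approximate a real-time optimization that yields the differential Lyapunov function and the stabilizing adaptive control law, and it additionally handles multiplicatively-separable parametric uncertainty through adaptation; those components are omitted from this sketch.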
Additional Information
© 2021 IEEE. This work was in part funded by the Jet Propulsion Laboratory, California Institute of Technology. Code: https://github.com/astrohiro/ancm
Files
Name | Size
---|---
Submitted - 2103.02987.pdf (md5:821a63a62f14d09fe2ee226a9e20ac27) | 755.2 kB
Additional details
- Alternative title: Learning-based Adaptive Control via Contraction Theory
- Eprint ID: 109052
- Resolver ID: CaltechAUTHORS:20210510-141344204
- JPL/Caltech
- Created: 2021-05-10 (from EPrint's datestamp field)
- Updated: 2022-02-16 (from EPrint's last_modified field)
- Caltech groups: GALCIT