Published December 14, 2021 | Accepted Version
Book Section - Chapter (Open Access)

A Theoretical Overview of Neural Contraction Metrics for Learning-based Control with Guaranteed Stability

Abstract

This paper presents a theoretical overview of a Neural Contraction Metric (NCM): a neural network model of an optimal contraction metric and corresponding differential Lyapunov function, the existence of which is a necessary and sufficient condition for incremental exponential stability of non-autonomous nonlinear system trajectories. Its innovation lies in providing formal robustness guarantees for learning-based control frameworks, utilizing contraction theory as an analytical tool to study the nonlinear stability of learned systems via convex optimization. In particular, we rigorously show in this paper that, by regarding modeling errors of the learning schemes as external disturbances, the NCM control is capable of obtaining an explicit bound on the distance between a time-varying target trajectory and perturbed solution trajectories, which decreases exponentially with time even in the presence of deterministic and stochastic perturbations. These useful features permit simultaneous synthesis of a contraction metric and associated control law by a neural network, thereby enabling real-time computable and provably robust learning-based control for general control-affine nonlinear systems.
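As a brief sketch of the condition underlying the abstract (standard contraction theory; the notation here is generic and not taken from this record): for a system \(\dot{x} = f(x,t)\), a uniformly positive definite metric \(M(x,t)\) is a contraction metric if the following matrix inequality holds, which implies exponential decay of the differential displacement \(\delta x\) between neighboring trajectories.

```latex
% Contraction condition for \dot{x} = f(x,t) with metric M(x,t) \succ 0
% (assumed generic notation; \alpha > 0 is the contraction rate):
\dot{M} + M \frac{\partial f}{\partial x}
        + \left(\frac{\partial f}{\partial x}\right)^{\top} M
  \preceq -2\alpha M
% Then V = \delta x^{\top} M \delta x satisfies \dot{V} \le -2\alpha V, so
\|\delta x(t)\| \le C e^{-\alpha t} \|\delta x(0)\|,
% i.e., incremental exponential stability: all trajectories converge
% to one another, including to a given target trajectory.
```

An NCM, as described above, approximates such an \(M(x,t)\) with a neural network, so that the metric (and a control law derived from it) can be evaluated in real time.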

Additional Information

© 2021 IEEE.

Attached Files

Accepted Version - 2110.00693.pdf (1.0 MB)
md5:a282485677815cafdf22e1e10a040836

Additional details

Created:
August 20, 2023
Modified:
October 23, 2023