LyaNet: A Lyapunov Framework for Training Neural ODEs
Abstract
We propose a method for training ordinary differential equations by using a control-theoretic Lyapunov condition for stability. Our approach, called LyaNet, is based on a novel Lyapunov loss formulation that encourages the inference dynamics to converge quickly to the correct prediction. Theoretically, we show that minimizing Lyapunov loss guarantees exponential convergence to the correct solution and enables a novel robustness guarantee. We also provide practical algorithms, including one that avoids the cost of backpropagating through a solver or using the adjoint method. Relative to standard Neural ODE training, we empirically find that LyaNet can offer improved prediction performance, faster convergence of inference dynamics, and improved adversarial robustness. Our code is available at https://github.com/ivandariojr/LyapunovLearning.
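The abstract's two central claims, exponential convergence from minimizing a Lyapunov loss and training without a solver, follow a standard control-theoretic pattern. The sketch below is illustrative only, not the paper's exact formulation: the notation (inference dynamics $f_\theta$, a potential $V$ that vanishes at the correct prediction, rate $\kappa$, and a hinge-style penalty) is an assumption made here for exposition.

```latex
% Assumed notation: inference dynamics \dot{x} = f_\theta(x, t), potential
% V(x) >= 0 vanishing only at the correct prediction, rate \kappa > 0.
% Exponential-stability (Lyapunov) condition along trajectories:
\nabla V(x)^\top f_\theta(x, t) \le -\kappa \, V(x)
% If this holds, Gronwall's inequality gives
%   V(x(t)) \le V(x(0)) e^{-\kappa t},
% i.e. exponential convergence of the inference dynamics.
% A Lyapunov loss in this spirit penalizes pointwise violations:
\mathcal{L}_{\mathrm{Lya}}(\theta)
  = \mathbb{E}_{x, t}\left[\max\left(0,\;
      \nabla V(x)^\top f_\theta(x, t) + \kappa \, V(x)\right)\right]
```

Because such a loss is a pointwise condition on states rather than on solved trajectories, its expectation can be estimated by sampling, which is one way a training algorithm can avoid backpropagating through an ODE solver or using the adjoint method, as the abstract describes.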
Additional Information
© 2022 by the author(s). We would like to thank Andrew Taylor, Jeremy Bernstein, Yujia Huang and Matt Levine for the insightful discussions. This project was funded in part by AeroVironment.
Attached Files
Name | Size | MD5 |
---|---|---|
Published - rodriguez22a.pdf | 1.7 MB | 9de39f874818039f7bb2dfaef650b60e |
Submitted - 2202.02526.pdf | 1.7 MB | 4d8cb899b96464eb43142ee5aa1795c5 |
Additional details
- Eprint ID: 115590
- Resolver ID: CaltechAUTHORS:20220714-224552040
- Funder: AeroVironment
- Created: 2022-07-15 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)