FI-ODE: Certified and Robust Forward Invariance in Neural ODEs
Abstract
We study how to certifiably enforce forward invariance properties in neural ODEs. Forward invariance implies that the hidden states of the ODE will stay in a "good" region, and a robust version would hold even under adversarial perturbations to the input. Such properties can be used to certify desirable behaviors such as adversarial robustness (the hidden states stay in the region that generates accurate classification even under input perturbations) and safety in continuous control (the system never leaves some safe set). We develop a general approach using tools from non-linear control theory and sampling-based verification. Our approach empirically produces the strongest adversarial robustness guarantees compared to prior work on certifiably robust ODE-based models (including implicit-depth models).
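For context, the core property can be stated with the standard definition from dynamical systems (this formalization is ours, not quoted from the paper): a set $\mathcal{S}$ of hidden states is forward invariant under the neural ODE dynamics $\dot{x}(t) = f_\theta(x(t), t)$ if

\[
x(0) \in \mathcal{S} \;\Longrightarrow\; x(t) \in \mathcal{S} \quad \text{for all } t \ge 0 .
\]

Certified robustness then amounts to choosing $\mathcal{S}$ to be a region whose states all decode to the correct class, and showing the implication above holds for every input in the perturbation set.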
Additional Information
This work is funded in part by AeroVironment and NSF grant CCF-1918865.
Additional details
- Eprint ID: 118474
- Resolver ID: CaltechAUTHORS:20221219-234122405
- Funders: AeroVironment; NSF (CCF-1918865)
- Created: 2022-12-21
- Updated: 2023-06-02