Published November 2, 2021 | Published + Supplemental Material
Book Section - Chapter | Open Access

Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds

Abstract

Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant. However, such a bound is often loose: it tends to over-regularize the neural network and degrade its natural accuracy. A tighter Lipschitz bound may provide a better tradeoff between natural and certified accuracy, but is generally hard to compute exactly due to the non-convexity of the network. In this work, we propose an efficient and trainable local Lipschitz upper bound that considers the interactions between activation functions (e.g., ReLU) and weight matrices. Specifically, when computing the induced norm of a weight matrix, we eliminate the corresponding rows and columns where the activation function is guaranteed to be constant in the neighborhood of each given data point, which provides a provably tighter bound than the global Lipschitz constant of the neural network. Our method can be used as a plug-in module to tighten the Lipschitz bound in many certifiable training algorithms. Furthermore, we propose to clip activation functions (e.g., ReLU and MaxMin) with a learnable upper threshold and a sparsity loss to help the network achieve an even tighter local Lipschitz bound. Experimentally, we show that our method consistently outperforms state-of-the-art methods in both clean and certified accuracy on the MNIST, CIFAR-10, and TinyImageNet datasets with various network architectures.
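To make the row-and-column elimination concrete, the sketch below (not the authors' released code) bounds the local Lipschitz constant of a small ReLU network: it propagates interval bounds around an input, drops the rows and columns of each weight matrix corresponding to neurons whose ReLU output is provably constant (zero) over the whole neighborhood, and multiplies the spectral norms of the reduced matrices. The function names and the use of plain interval bound propagation (IBP) for the pre-activation bounds are illustrative assumptions.

import torch

def spectral_norm(W, iters=100):
    """Power-iteration estimate of the largest singular value of W.
    (An estimate only; a certified implementation would need a sound
    upper bound on the induced norm.)"""
    if W.numel() == 0:           # every row or column was eliminated
        return torch.tensor(0.0)
    v = torch.randn(W.shape[1])
    v = v / v.norm()
    for _ in range(iters):
        v = W.t() @ (W @ v)
        v = v / (v.norm() + 1e-12)
    return (W @ v).norm()

def local_lipschitz_bound(weights, biases, x, eps):
    """Upper-bound the local l2 Lipschitz constant of a ReLU MLP
    f(x) = W_L relu(... relu(W_1 x + b_1) ...) + b_L over the
    l_inf ball of radius eps around x."""
    lo, hi = x - eps, x + eps    # input interval
    bound = torch.tensor(1.0)
    keep = None                  # live neurons of the previous layer
    for i, (W, b) in enumerate(zip(weights, biases)):
        if keep is not None:
            # Dead neurons output a constant 0, so the columns of the
            # next weight matrix that read from them can be eliminated.
            W = W[:, keep]
            lo, hi = lo[keep], hi[keep]
        # Interval bound propagation through the affine layer.
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        center, radius = W @ mid + b, W.abs() @ rad
        lo, hi = center - radius, center + radius
        if i < len(weights) - 1:
            # ReLU is provably constant (zero) wherever hi <= 0;
            # eliminate those rows before taking the induced norm.
            keep = hi > 0
            bound = bound * spectral_norm(W[keep, :])
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
        else:
            bound = bound * spectral_norm(W)   # last layer: no ReLU
    return bound

# Example usage on a hypothetical 2-layer network:
Ws = [torch.randn(16, 8), torch.randn(4, 16)]
bs = [torch.zeros(16), torch.zeros(4)]
print(local_lipschitz_bound(Ws, bs, torch.zeros(8), eps=0.1))

Setting every keep mask to all-ones recovers the usual global product-of-norms bound, so the reduced-matrix bound can never be looser; clipping activations with a learnable upper threshold, as the abstract describes, makes more neurons provably constant and shrinks the retained submatrices further.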

Additional Information

Y. Huang is supported by DARPA LwLL grants. A. Anandkumar is supported in part by the Bren endowed chair, DARPA LwLL grants, faculty fellowships from Microsoft, Google, and Adobe, and a DE Logi grant. Huan Zhang is supported by funding from the Bosch Center for Artificial Intelligence.

Attached Files

Published - NeurIPS-2021-training-certifiably-robust-neural-networks-with-efficient-local-lipschitz-bounds-Paper.pdf

Supplemental Material - NeurIPS-2021-training-certifiably-robust-neural-networks-with-efficient-local-lipschitz-bounds-Supplemental.pdf

Additional details

Created: August 20, 2023
Modified: October 24, 2023