Published February 5, 2019 | Submitted
Report | Open

Detecting Adversarial Examples via Neural Fingerprinting

Abstract

Deep neural networks are vulnerable to adversarial examples, which dramatically alter model output using small input changes. We propose Neural Fingerprinting, a simple, yet effective method to detect adversarial examples by verifying whether model behavior is consistent with a set of secret fingerprints, inspired by the use of biometric and cryptographic signatures. The benefits of our method are that 1) it is fast, 2) it is prohibitively expensive for an attacker to reverse-engineer which fingerprints were used, and 3) it does not assume knowledge of the adversary. In this work, we pose a formal framework to analyze fingerprints under various threat models, and characterize Neural Fingerprinting for linear models. For complex neural networks, we empirically demonstrate that Neural Fingerprinting significantly improves on state-of-the-art detection mechanisms by detecting the strongest known adversarial attacks with 98-100% AUC-ROC scores on the MNIST, CIFAR-10 and MiniImagenet (20 classes) datasets. In particular, the detection accuracy of Neural Fingerprinting generalizes well to unseen test-data under various black- and whitebox threat models, and is robust over a wide range of hyperparameters and choices of fingerprints.
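As a rough illustration of the consistency check the abstract describes, the following minimal Python/PyTorch sketch tests whether a model's response to a set of secret input perturbations matches the expected output changes. The names (model, dx, dy, tau), the normalization of the logits, and the squared-error mismatch are assumptions made for illustration, not the paper's exact formulation.

    # Minimal sketch of a fingerprint-consistency check (illustrative only).
    # Assumptions: `model` maps a batch of inputs to logits, `dx` holds the
    # secret fingerprint perturbations, `dy` the expected (normalized) output
    # changes, and `tau` is a detection threshold tuned on clean data.
    import torch
    import torch.nn.functional as F

    def fingerprint_score(model, x, dx, dy):
        # x:  (1, *input_shape)   -- input under test
        # dx: (K, *input_shape)   -- K secret perturbation directions
        # dy: (K, num_classes)    -- expected change in normalized logits
        with torch.no_grad():
            base = F.normalize(model(x), dim=-1)        # (1, C) reference response
            pert = F.normalize(model(x + dx), dim=-1)   # (K, C) perturbed responses
            observed = pert - base                      # how the model actually moved
            # mean squared mismatch between observed and expected responses
            return ((observed - dy) ** 2).sum(dim=-1).mean().item()

    def is_adversarial(model, x, dx, dy, tau):
        # Flag inputs whose fingerprint responses deviate by more than tau.
        return fingerprint_score(model, x, dx, dy) > tau

    # Toy usage (purely illustrative shapes and values):
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x = torch.randn(1, 1, 28, 28)
    dx = 0.05 * torch.randn(5, 1, 28, 28)           # 5 secret fingerprint directions
    dy = F.normalize(torch.randn(5, 10), dim=-1)    # expected responses (kept secret)
    print(is_adversarial(model, x, dx, dy, tau=0.5))

In practice the threshold tau would be chosen on held-out clean data (for example, to meet a target false-positive rate), and the fingerprint pairs themselves are kept secret, which is what the abstract argues makes it prohibitively expensive for an attacker to reverse-engineer them.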

Additional Information

This work is supported in part by NSF grants #1564330, #1637598, #1545126; STARnet, a Semiconductor Research Corporation program, sponsored by MARCO and DARPA; and gifts from Bloomberg and Northrop Grumman. The authors would like to thank Xingjun Ma for providing the relevant baseline numbers for comparison.

Files

Submitted - 1803.03870.pdf (1.9 MB) — md5:3136511f4c25adf3709c54db1b6e4873

Additional details

Created: August 19, 2023
Modified: October 20, 2023