Published May 16, 2022 | Published + Submitted
Journal Article | Open Access

Diffusion Models for Adversarial Purification

Abstract

Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model. These methods make no assumptions about the form of the attack or the classification model, and thus can defend pre-existing classifiers against unseen threats. However, their performance currently falls behind adversarial training methods. In this work, we propose DiffPure, which uses diffusion models for adversarial purification: given an adversarial example, we first diffuse it with a small amount of noise following a forward diffusion process, and then recover the clean image through a reverse generative process. To evaluate our method against strong adaptive attacks in an efficient and scalable way, we propose to use the adjoint method to compute full gradients of the reverse generative process. Extensive experiments on three image datasets (CIFAR-10, ImageNet, and CelebA-HQ) with three classifier architectures (ResNet, WideResNet, and ViT) demonstrate that our method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods, often by a large margin. Project page: https://diffpure.github.io.
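The sketch below illustrates the purification idea described in the abstract (diffuse the input with a small amount of noise, then run the reverse generative process). It is a minimal, hypothetical DDPM-style version written in PyTorch, not the paper's implementation: the paper uses a score-based SDE formulation and the adjoint method for gradients, while here `denoiser(x, t)` is an assumed pretrained noise-prediction network and `betas` is a standard linear noise schedule chosen for illustration.

import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)        # assumed linear forward noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative products \bar{alpha}_t

def purify(x_adv, denoiser, t_star=100):
    """Diffuse x_adv up to timestep t_star, then denoise back to t = 0."""
    # Forward diffusion: add a small amount of Gaussian noise in closed form.
    a_bar = alpha_bars[t_star - 1]
    x = torch.sqrt(a_bar) * x_adv + torch.sqrt(1 - a_bar) * torch.randn_like(x_adv)

    # Reverse generative process: ancestral DDPM sampling from t_star down to 0.
    for t in reversed(range(t_star)):
        t_batch = torch.full((x.shape[0],), t, device=x.device)
        eps = denoiser(x, t_batch)            # predicted noise at timestep t
        a, a_bar = alphas[t], alpha_bars[t]
        mean = (x - (1 - a) / torch.sqrt(1 - a_bar) * eps) / torch.sqrt(a)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # purified image, passed to the unchanged pre-existing classifier

In this sketch, t_star controls the "small amount of noise" mentioned in the abstract: it must be large enough to wash out the adversarial perturbation yet small enough for the reverse process to recover an image with the original semantic content.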

Additional Information

© 2022 by the author(s). We would like to thank the AIALGO team at NVIDIA and Anima Anandkumar's research group at Caltech for reading the paper and providing fruitful suggestions. We also thank the anonymous reviewers for helpful comments.

Attached Files

Published - nie22a.pdf

Submitted - 2205.07460.pdf

Files (25.6 MB)

nie22a.pdf: 12.7 MB (md5:dbb8c84cdd8fb79ae5eeb2168940d3f9)
2205.07460.pdf: 12.9 MB (md5:f5e07c2457599efa6475e6f0c3743ca8)

Additional details

Created: August 20, 2023
Modified: October 24, 2023