Abstract
Improving the resistance of deep neural networks against adversarial attacks is important for deploying models in real-world applications. However, most defense methods are designed to defend against intensity perturbations and ignore location perturbations, which should be equally important for deep model security. In this paper, we focus on adversarial deformations, a typical class of location perturbations, and propose a flow gradient regularization to improve the resistance of models. Theoretically, we prove that, compared with input gradient regularization, regularizing flow gradients yields a tighter bound. Across multiple datasets, architectures, and adversarial deformations, our empirical results indicate that models trained with flow gradient regularization achieve substantially better resistance than those trained with input gradient regularization, and also outperform adversarial training. Moreover, compared with training directly on adversarial deformations, our method achieves better results against unseen attacks, and combining the two methods improves resistance further.
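To make the idea concrete, below is a minimal PyTorch sketch of what a flow gradient penalty could look like: the classification loss is differentiated with respect to a spatial flow field evaluated at the identity warp, rather than with respect to the input pixels as in input gradient regularization, and the norm of that gradient is added to the training objective. The function name, the `grid_sample`-based warp, and the squared-L2 penalty form are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def flow_gradient_penalty(model, x, y, lam=1.0):
    """Hypothetical sketch of flow gradient regularization.

    Penalizes the gradient of the classification loss with respect to
    a spatial flow field (at the identity warp), instead of the gradient
    with respect to the input pixels. Names and the exact penalty form
    are assumptions for illustration, not the paper's code.
    """
    n, c, h, w = x.shape
    # Identity affine transform, shape (N, 2, 3).
    theta = torch.eye(2, 3, device=x.device, dtype=x.dtype)
    theta = theta.unsqueeze(0).repeat(n, 1, 1)
    # Identity sampling grid in [-1, 1]^2, shape (N, H, W, 2).
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    # Zero flow with grad enabled, so the loss can be differentiated
    # with respect to the deformation rather than the pixels.
    flow = torch.zeros_like(grid, requires_grad=True)
    x_warped = F.grid_sample(x, grid + flow, align_corners=False)
    loss = F.cross_entropy(model(x_warped), y)
    # Gradient of the loss w.r.t. the flow field; create_graph=True
    # keeps the graph so the penalty itself can be optimized.
    g = torch.autograd.grad(loss, flow, create_graph=True)[0]
    # Squared L2 norm of the flow gradient, averaged over the batch
    # (assumed penalty form).
    penalty = g.pow(2).sum(dim=(1, 2, 3)).mean()
    return loss + lam * penalty
```

In a training loop, the value returned by `flow_gradient_penalty` would replace the plain cross-entropy loss; `create_graph=True` enables the double backward pass that optimizing the gradient penalty requires.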