Abstract
Powerful deep neural networks are vulnerable to adversarial attacks. To
obtain adversarially robust models, researchers have separately developed
adversarial training and Jacobian regularization techniques. There are abundant
theoretical and empirical studies of adversarial training, but the theoretical
foundations of Jacobian regularization are still lacking. In this study, we
show that Jacobian regularization is closely related to adversarial training,
in that the $\ell_{2}$ or $\ell_{1}$ Jacobian regularized loss serves as an
approximate upper bound on the adversarially robust loss under an $\ell_{2}$ or
$\ell_{\infty}$ adversarial attack, respectively. Further, we establish the
robust generalization gap of the Jacobian regularized risk minimizer by
bounding the Rademacher complexity of both the standard loss function class and
the Jacobian regularization function class. Our theoretical results indicate
that the norms of the Jacobian are related to both standard and robust
generalization. We also
perform experiments on MNIST classification to demonstrate that Jacobian
regularized risk minimization indeed serves as a surrogate for adversarially
robust risk minimization, and that reducing the norms of the Jacobian can
improve both standard and robust generalization. This study advances both the
theoretical and empirical understanding of adversarially robust generalization
via Jacobian regularization.
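
To make the approximate upper bound stated in the abstract concrete, the following is a standard first-order sketch; the notation is ours, and the paper's precise statement and constants may differ. Linearizing the loss $\ell(f(x+\delta),y)$ in the perturbation $\delta$ and maximizing over an $\ell_{p}$ ball of radius $\varepsilon$ turns the inner maximization into a dual-norm penalty:

$$
\max_{\|\delta\|_{p}\le\varepsilon} \ell(f(x+\delta),y)
\;\approx\; \ell(f(x),y) + \max_{\|\delta\|_{p}\le\varepsilon} \big\langle \nabla_{x}\ell(f(x),y),\, \delta \big\rangle
\;=\; \ell(f(x),y) + \varepsilon\, \big\| J_{f}(x)^{\top}\nabla_{f}\ell(f(x),y) \big\|_{q},
$$

where $J_{f}(x)$ is the Jacobian of the network outputs with respect to the input and $1/p+1/q=1$. Taking $p=2$ gives a $q=2$ ($\ell_{2}$) penalty, and $p=\infty$ gives a $q=1$ ($\ell_{1}$) penalty, which matches the pairing of regularizers and attacks described above.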
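
As an illustration of how Jacobian regularized risk minimization can be implemented, the minimal PyTorch sketch below penalizes the input gradient $J_{f}(x)^{\top}\nabla_{f}\ell$, a common single-projection surrogate for the full Jacobian norm; the function name and the hyperparameters lam and eps are hypothetical and not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

def jacobian_regularized_loss(model: nn.Module,
                              x: torch.Tensor,
                              y: torch.Tensor,
                              lam: float = 0.01,
                              eps: float = 0.1,
                              q: float = 2.0) -> torch.Tensor:
    """Cross-entropy loss plus an eps-scaled l_q input-gradient penalty.

    q=2 pairs with an l_2 attack budget and q=1 with an l_inf budget
    (dual norms), mirroring the pairing stated in the abstract; lam and
    eps are illustrative hyperparameters, not values from the paper.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # The input gradient equals J_f(x)^T grad_f(loss); create_graph=True
    # keeps the penalty differentiable so it can be minimized by SGD.
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad_x.flatten(1).norm(p=q, dim=1).mean()
    return loss + lam * eps * penalty

In a standard MNIST training loop one would call this function in place of the plain cross-entropy and backpropagate as usual; setting q=1.0 targets $\ell_{\infty}$ rather than $\ell_{2}$ robustness.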