Abstract
We demonstrate that the choice of optimizer, neural network architecture, and
regularizer significantly affects the adversarial robustness of linear neural
networks, providing guarantees without the need for adversarial training. To
this end, we revisit a known result linking maximally robust classifiers and
minimum norm solutions, and combine it with recent results on the implicit bias
of optimizers. First, we show that, under certain conditions, it is possible to
achieve both perfect standard accuracy and a certain degree of robustness,
simply by training an overparametrized model using the implicit bias of the
optimization. In that regime, there is a direct relationship between the type
of optimizer and the attack to which the model is robust. To the best of
our knowledge, this work is the first to study the impact of optimization
methods such as sign gradient descent and proximal methods on adversarial
robustness. Second, we characterize the robustness of linear convolutional
models, showing that they resist attacks subject to a constraint on the
Fourier-$\ell_\infty$ norm. To illustrate these findings, we design a novel
Fourier-$\ell_\infty$ attack that finds adversarial examples with controllable
frequencies. We evaluate the Fourier-$\ell_\infty$ robustness of
adversarially trained deep CIFAR-10 models from the standard RobustBench
benchmark and visualize adversarial perturbations.
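As a rough illustration of the threat model named above (a sketch in our own notation; the symbols $\delta$, $F$, and $\epsilon$ are assumptions, not taken from the abstract), the Fourier-$\ell_\infty$ constraint can be read as an $\ell_\infty$ ball in the Fourier domain:
\[
  \|\delta\|_{\mathcal{F},\infty} \;=\; \|F\delta\|_{\infty} \;\le\; \epsilon,
\]
where $\delta$ is the additive perturbation, $F$ denotes the discrete Fourier transform, and $\epsilon$ is the attack budget. An attack constrained in this way can concentrate its budget on a chosen set of frequencies, which is consistent with the "controllable frequencies" mentioned above.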