Abstract
Adversarial training is arguably the most popular way to provide empirical
robustness against specific adversarial examples. While variants based on
multi-step attacks incur significant computational overhead, single-step
variants are vulnerable to a failure mode known as catastrophic overfitting,
which hinders their practical utility for large perturbations. A parallel line
of work, certified training, has focused on producing networks amenable to
formal guarantees of robustness against any possible attack. However, the wide
gap between the best-performing empirical and certified defenses has severely
limited the applicability of the latter. Inspired by recent developments in
certified training, which combine adversarial attacks with network
over-approximations, and by the connections between local linearity and
catastrophic overfitting, we present experimental evidence on the practical
utility and limitations of using certified training towards empirical
robustness. We show that, when tuned for the purpose, a recent certified
training algorithm can prevent catastrophic overfitting on single-step attacks,
and that it can bridge the gap to multi-step baselines under appropriate
experimental settings. Finally, we present a conceptually simple regularizer
for network over-approximations that can achieve similar effects while markedly
reducing runtime.
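
The abstract does not specify the form of the proposed regularizer. Purely as an illustrative sketch, and not as the paper's algorithm, the following PyTorch snippet shows one plausible instantiation of the ingredients named above: single-step (FGSM) adversarial training combined with a penalty on the width of interval bounds (IBP), a common kind of network over-approximation. All names here (fgsm_attack, interval_bound_width, reg_weight) are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps):
        # Single-step FGSM perturbation within an L-infinity ball of radius eps.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    def interval_bound_width(model, x, eps):
        # Propagate [x - eps, x + eps] through Flatten/Linear/ReLU layers via
        # interval bound propagation (IBP) and return the mean width of the
        # output interval, used here as a tightness surrogate. Illustrative only.
        lb, ub = (x - eps).clamp(0.0, 1.0), (x + eps).clamp(0.0, 1.0)
        for layer in model:
            if isinstance(layer, nn.Linear):
                mid, rad = (ub + lb) / 2, (ub - lb) / 2
                mid = layer(mid)                        # W @ mid + b
                rad = rad @ layer.weight.abs().t()      # |W| @ rad
                lb, ub = mid - rad, mid + rad
            elif isinstance(layer, nn.ReLU):
                lb, ub = lb.clamp(min=0), ub.clamp(min=0)  # ReLU is monotone
            elif isinstance(layer, nn.Flatten):
                lb, ub = layer(lb), layer(ub)
        return (ub - lb).mean()

    def training_step(model, x, y, eps, reg_weight, optimizer):
        # Single-step adversarial loss plus an over-approximation width penalty.
        x_adv = fgsm_attack(model, x, y, eps)
        loss = F.cross_entropy(model(x_adv), y)
        loss = loss + reg_weight * interval_bound_width(model, x, eps)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        # Toy MLP and random data, purely to make the sketch runnable end to end.
        model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                              nn.Linear(128, 10))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
        print(training_step(model, x, y, eps=8 / 255, reg_weight=1e-3,
                            optimizer=opt))

The intended intuition, under these assumptions, is that narrow interval bounds correlate with locally well-behaved (closer to linear) networks, which is the property the abstract links to avoiding catastrophic overfitting; the IBP pass costs roughly one extra forward pass, which is why such a regularizer could markedly reduce runtime relative to multi-step attacks.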