Abstract
Artificial neural networks in general, and deep learning networks in
particular, have established themselves as popular and powerful machine
learning algorithms. While the often tremendous size of these networks is
beneficial when solving complex tasks, the sheer number of parameters also
renders such networks vulnerable to malicious behavior such as adversarial
perturbations, which can change a model's classification decision. Moreover,
while single-step adversaries can easily be transferred from network to
network, transferring the more powerful multi-step adversaries has usually
been rather difficult. In this work, we introduce a method for generating
strong adversaries that can easily (and frequently) be transferred between
different models. This method is then used to generate a large set of
adversaries, based on which the effects of selected defense methods are
experimentally assessed. Finally, we introduce a novel, simple, yet effective
approach to enhance the resilience of neural networks against adversaries and
benchmark it against established defense methods. In contrast to existing
methods, our proposed defense approach is much more efficient, as it requires
only a single additional forward pass to achieve comparable performance.
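The abstract does not spell out how the adversaries are constructed; for orientation only, the following is a minimal sketch of the standard single-step (FGSM) and multi-step (PGD) attacks that the single-step/multi-step distinction refers to, not the transfer method introduced in this work. The model, `eps`, `alpha`, and step count are illustrative placeholders.

```python
# Sketch of standard single-step (FGSM) and multi-step (PGD) attacks.
# NOT the transfer method proposed in this paper; eps/alpha/steps are
# placeholders, and inputs are assumed to be images scaled to [0, 1].
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Single-step attack: one signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Multi-step attack: iterated signed-gradient steps, projected
    back into the eps-ball around the clean input after each step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Multi-step attacks like PGD are typically stronger against the model they are computed on but tend to overfit its loss surface, which is a commonly cited reason why their transfer between models has historically been harder than for single-step attacks.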