Abstract
Deep neural networks (DNNs) have achieved unprecedented success in numerous
machine learning tasks across various domains. However, the existence of
adversarial examples has raised concerns about applying deep learning to
safety-critical applications. As a result, we have witnessed increasing
interest in studying attack and defense mechanisms for DNN models on different
data types, such as images, graphs, and text. It is therefore necessary to
provide a systematic and comprehensive overview of the main threats posed by
attacks and the success of the corresponding countermeasures. In this survey,
we review the state-of-the-art algorithms for generating adversarial examples
and the countermeasures against them, for three popular data types: images,
graphs, and text.
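To make the notion of an adversarial example concrete, the following is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic classifier. This is an illustrative assumption on our part, not a method singled out by the survey itself; the function names, weights, and epsilon value are all hypothetical.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step FGSM (hypothetical toy setup): perturb input x in the
    direction of the sign of the gradient of the logistic loss w.r.t. x."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of the positive class
    grad_x = (p - y) * w              # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)  # small sign-based perturbation

def logistic_loss(x, w, b, y):
    """Negative log-likelihood of label y under the linear-logistic model."""
    z = w @ x + b
    return np.log1p(np.exp(-z)) if y == 1.0 else np.log1p(np.exp(z))

# Toy data: random weights and a single input with label y = 1.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm(x, w, b, y, eps=0.5)
print(logistic_loss(x, w, b, y), logistic_loss(x_adv, w, b, y))
```

The perturbation is bounded in the infinity norm by eps, yet it provably increases the loss of this linear model; on deep networks the same one-step heuristic often suffices to flip the predicted class, which is the core concern the survey addresses.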