Despite the efficiency and scalability of machine learning systems, recent
studies have demonstrated that many classification methods, especially deep
neural networks (DNNs), are vulnerable to adversarial examples; i.e., examples
that are carefully crafted to fool a well-trained classification model while
being indistinguishable from natural data to humans. This makes it potentially
unsafe to apply DNNs or related methods in security-critical areas. Since this
issue was first identified by Biggio et al. (2013) and Szegedy et al. (2014),
much work has been done in this field, including the development of attack
methods to generate adversarial examples and the construction of defense
techniques to guard against such examples. This paper aims to introduce this
topic and its latest developments to the statistical community, primarily
focusing on the generation of and defense against adversarial examples. The
computer code (in Python and R) used in the numerical experiments is publicly
available for readers to explore the surveyed methods. It is the hope of the
authors that this paper will encourage more statisticians to work in this
important and exciting field of generating and defending against adversarial
examples.