Recent research has found that many families of machine learning models are
vulnerable to adversarial examples: inputs that are specifically designed to
cause the target model to produce erroneous outputs. In this survey, we focus
on machine learning models in the visual domain, where methods for generating
and detecting such examples have been most extensively studied. We explore a
variety of adversarial attack methods that apply to image-space content, real
world adversarial attacks, adversarial defenses, and the transferability
property of adversarial examples. We also discuss strengths and weaknesses of
various methods of adversarial attack and defense. Our aim is to provide an
extensive coverage of the field, furnishing the reader with an intuitive
understanding of how adversarial attacks and defenses work, and to enlarge
the community of researchers studying this fundamental set of problems.
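
As a concrete illustration of how such inputs can be crafted (an example of
our choosing, not a contribution specific to this survey), consider the fast
gradient sign method of Goodfellow et al. (2014), one of the simplest attacks
in this literature. Given a model with parameters $\theta$, loss $J$, an input
$x$ with true label $y$, and a small perturbation budget $\epsilon$, the
adversarial input is

\[
x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\bigl(\nabla_x J(\theta, x, y)\bigr),
\]

a perturbation that is typically imperceptible to humans yet often suffices to
change the model's prediction.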