Despite recent advances in a wide spectrum of applications, machine
learning models, especially deep neural networks, have been shown to be
vulnerable to adversarial attacks. Attackers add carefully crafted
perturbations to inputs; such perturbations are almost imperceptible to
humans, yet they can cause models to make wrong predictions. Techniques
that protect models against adversarial inputs are called adversarial
defense methods.
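For concreteness, evasion attacks are commonly formalized as maximizing the model's loss under a norm-bounded perturbation budget; the following is a standard textbook formulation rather than one taken from the surveyed work, with $f$ denoting the model, $L$ the loss, $(x, y)$ an input-label pair, and $\epsilon$ the perturbation budget:

\[
\max_{\|\delta\|_p \le \epsilon} L\big(f(x+\delta),\, y\big),
\qquad
\delta_{\mathrm{FGSM}} = \epsilon \cdot \mathrm{sign}\big(\nabla_x L(f(x), y)\big),
\]

where the fast gradient sign method (FGSM) of Goodfellow et al. gives a well-known one-step approximation under the $\ell_\infty$ norm.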
Although many approaches have been proposed to study adversarial attacks and
defenses in different scenarios, an intriguing and crucial challenge remains:
how can we truly understand model vulnerability? Inspired by the saying
"if you know yourself and your enemy, you need not fear the battles", we may
tackle this challenge by interpreting machine learning models to open up
these black boxes. The goal of model interpretation, or interpretable
machine learning, is to extract human-understandable explanations of the
working mechanisms of models. Recently, some approaches have started to
incorporate interpretation into the exploration of adversarial attacks and
defenses.
Meanwhile, we also observe that many existing methods for adversarial attacks
and defenses, although not explicitly framed as such, can be understood from
the perspective of interpretation. In this paper, we review recent work on
adversarial attacks and defenses, particularly from the perspective of machine
learning interpretation. We categorize interpretation into two types:
feature-level interpretation and model-level interpretation. For each type of
interpretation, we elaborate on how it could be used for adversarial attacks
and defenses. We then briefly illustrate additional correlations between
interpretation and adversaries. Finally, we discuss the challenges and future
directions in tackling adversarial issues with interpretation.