Recent works have empirically shown that adversarial examples can be hidden from
neural network interpretation (i.e., crafted so that interpretation maps remain
visually similar to those of benign inputs), and that interpretability itself is
susceptible to adversarial attacks. In this paper, we theoretically show that,
under a proper measure of interpretation discrepancy, it is difficult to prevent
prediction-evasion adversarial attacks from also causing interpretation
discrepancy, as confirmed by experiments on MNIST, CIFAR-10, and Restricted
ImageNet. Spurred by this finding, we develop an interpretability-aware defensive
scheme built only on promoting robust interpretation (without resorting to
adversarial loss minimization). We show that our defense achieves both robust
classification and robust interpretation, outperforming state-of-the-art
adversarial training methods, particularly against attacks with large
perturbations.
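
For concreteness, the sketch below illustrates one way an interpretation-robustness regularizer of this flavor could be implemented. The choice of plain input-gradient saliency as the interpreter, the uniform-noise perturbation standing in for an attack, and the hyperparameters EPS and GAMMA are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumed formulation): alongside the standard classification
# loss, penalize the l1 discrepancy between saliency maps of clean and
# perturbed inputs, so that interpretation stays stable under perturbation.
import torch
import torch.nn.functional as F

EPS = 8 / 255    # assumed perturbation radius
GAMMA = 0.1      # assumed weight on the interpretation-discrepancy penalty

def saliency(model, x, y):
    """Input-gradient saliency map for the true-class logit (one simple interpreter)."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x).gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad

def interpretability_aware_loss(model, x, y):
    # Standard cross-entropy on clean inputs (no adversarial loss minimization).
    ce = F.cross_entropy(model(x), y)
    # Randomly perturbed copy of the batch (stand-in for an adversarial perturbation).
    x_pert = (x + EPS * (2 * torch.rand_like(x) - 1)).clamp(0, 1)
    # l1 discrepancy between clean and perturbed saliency maps.
    disc = (saliency(model, x, y) - saliency(model, x_pert, y)).abs().mean()
    return ce + GAMMA * disc
```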