As machine learning algorithms continue to improve, there is an increasing
need to explain why a model produces a particular prediction for a given
input. In recent years, several model interpretability methods have been
developed, aiming to explain which regions of the input are primarily
responsible for the model's prediction. In parallel, a significant research
effort has been devoted in recent years to developing adversarial example
generation methods that fool models without altering the true label of the
input, as it would have been classified by a human annotator. In this paper,
we bridge the gap between adversarial example
generation and model interpretability, and introduce a modification to the
adversarial example generation process that encourages better
interpretability. We analyze the proposed method on a public medical imaging
dataset, both quantitatively and qualitatively, and show that it significantly
outperforms the leading existing alternative method. Our method is simple to
implement and can easily be plugged into most common adversarial example
generation frameworks. Additionally, we propose an explanation quality
metric, $APE$ ("Adversarial Perturbative Explanation"), which measures how
well an explanation describes model decisions.