Explanation methods aim to make neural networks more trustworthy and
interpretable. In this paper, we demonstrate a property of explanation methods
which is disconcerting for both of these purposes. Namely, we show that
explanations can be manipulated arbitrarily by applying perturbations to the
input that are hardly perceptible to the eye and keep the network's output
approximately constant. We establish theoretically that this phenomenon can be
related to certain geometrical properties of neural networks. This allows us to
derive an upper bound on the susceptibility of explanations to manipulations.
Based on this result, we propose effective mechanisms to enhance the robustness
of explanations.
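
To make the manipulation setting concrete, the following is a minimal PyTorch-style sketch of the kind of optimization described above: perturb an input so that its explanation moves toward a chosen target map while a penalty keeps the network's output approximately constant. It is illustrative only; the paper treats several explanation methods, whereas the sketch uses a plain gradient (saliency) explanation, and the names `gradient_explanation`, `manipulate`, `gamma`, and all hyperparameter values are assumptions, not the paper's implementation. It also assumes a classifier with smooth (twice-differentiable) activations so that second-order gradients are informative.

```python
import torch

def gradient_explanation(model, x):
    # Saliency-style explanation: gradient of the predicted-class score
    # w.r.t. the input (one of several explanation methods one could use).
    x = x.requires_grad_(True)
    score = model(x).max(dim=1).values.sum()
    (grad,) = torch.autograd.grad(score, x, create_graph=True)
    return grad

def manipulate(model, x, target_expl, steps=500, lr=1e-3, gamma=1e3):
    # Illustrative sketch: optimize a perturbed copy of x so that its
    # explanation approaches target_expl while the model output stays
    # close to the original output. gamma, lr, steps are placeholders.
    for p in model.parameters():
        p.requires_grad_(False)          # only the input is optimized
    out_orig = model(x).detach()
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        expl = gradient_explanation(model, x_adv)
        expl_loss = ((expl - target_expl) ** 2).mean()
        out_loss = ((model(x_adv) - out_orig) ** 2).mean()
        # Small gradients on expl_loss require second derivatives of the
        # network; smooth activations (e.g. softplus) make these usable.
        loss = expl_loss + gamma * out_loss
        loss.backward()
        opt.step()
    return x_adv.detach()
```

In practice one would additionally clamp `x_adv` to the valid input range after each step and bound the perturbation size so it remains hardly perceptible; those details are omitted here for brevity.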