Despite their impressive performance, deep neural networks exhibit striking
failures on out-of-distribution inputs. One core idea of adversarial example
research is to reveal neural network errors under such distribution shifts. We
decompose these errors into two complementary sources: sensitivity and
invariance. We show that deep networks are not only too sensitive to task-irrelevant
changes to their input, as is well known from epsilon-adversarial examples, but
are also too invariant to a wide range of task-relevant changes, leaving
vast regions of input space vulnerable to adversarial attacks. We show that such
excessive invariance occurs across various tasks and architecture types. On
MNIST and ImageNet, one can manipulate the class-specific content of almost any
image without changing the hidden activations. We identify an insufficiency of
the standard cross-entropy loss as a reason for these failures. Further, we
extend this objective based on an information-theoretic analysis so that it
encourages the model to consider all task-dependent features in its decision.
This provides the first approach tailored explicitly to overcome excessive
invariance and the resulting vulnerabilities.
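
As an illustration of the invariance-based failure mode, the sketch below searches for an input whose class-specific content differs from a source image but whose activations match it almost exactly, so the network cannot distinguish the two. This is only a minimal, gradient-based approximation of the idea, not the construction used in the paper; the `features` function, its toy parameters, and the optimization settings are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's construction): a gradient-based search
# for an invariance-based adversarial example. Starting from an image x_target
# whose class-specific content differs from x_source, we minimize the distance
# between the network's activations on the two images. A near-zero final loss
# means the network is invariant to this task-relevant change of content.
import jax
import jax.numpy as jnp

def features(params, x):
    # Placeholder feature extractor: a single linear layer for demonstration.
    return x.reshape(-1) @ params["W"] + params["b"]

def activation_match_loss(x_adv, params, target_acts):
    # Squared distance between the candidate's activations and the
    # activations of the source image we want to imitate.
    return jnp.sum((features(params, x_adv) - target_acts) ** 2)

def invariance_attack(params, x_source, x_target, steps=500, lr=0.1):
    # Activations the adversarial image should reproduce.
    target_acts = features(params, x_source)
    grad_fn = jax.grad(activation_match_loss)  # gradient w.r.t. x_adv
    x_adv = x_target
    for _ in range(steps):
        g = grad_fn(x_adv, params, target_acts)
        x_adv = jnp.clip(x_adv - lr * g, 0.0, 1.0)  # stay in valid pixel range
    return x_adv

# Toy usage with random data standing in for MNIST images and trained weights.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = {"W": 0.01 * jax.random.normal(k1, (784, 10)), "b": jnp.zeros(10)}
x_source = jax.random.uniform(k2, (28, 28))  # image with class-A content
x_target = jax.random.uniform(k3, (28, 28))  # image with class-B content
x_adv = invariance_attack(params, x_source, x_target)
print(jnp.linalg.norm(features(params, x_adv) - features(params, x_source)))
```

In this toy setup the residual activation difference after optimization indicates how freely the class content can be changed while the representation stays fixed; a flexible network with excess invariant directions admits many such metameric inputs.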