Adversarial robustness research primarily focuses on L_p perturbations, and
most defenses are developed with identical training-time and test-time
adversaries. However, in real-world applications, developers are unlikely to
have access to the full range of attacks or corruptions their system will face.
Furthermore, worst-case inputs are likely to be diverse and need not be
constrained to the L_p ball. To address this discrepancy between research
and reality, we introduce ImageNet-UA, a framework for evaluating model
robustness against a range of unforeseen adversaries, including eighteen new
non-L_p attacks. To perform well on ImageNet-UA, defenses must overcome a
generalization gap and be robust to a diverse set of attacks not encountered during
training. In extensive experiments, we find that existing robustness measures
do not capture unforeseen robustness, that standard robustness techniques are
outperformed by alternative training strategies, and that novel methods can improve
unforeseen robustness. We present ImageNet-UA as a useful tool for the
community to improve the worst-case behavior of machine learning systems.