The resizing of images, which is typically a required part of preprocessing
for computer vision systems, is vulnerable to attack. An image can be crafted
so that its content at the scale a machine-vision model consumes is completely
different from its content at the scales a human typically views it, and the
default settings of some common computer vision and machine learning systems
are vulnerable.
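As a concrete illustration of the attack idea, the following is a minimal, self-contained sketch using OpenCV's nearest-neighbor resizing. The resolutions, image contents, and variable names are hypothetical stand-ins, and probing the resizer with index maps simply avoids assuming any particular rounding rule inside the library; this is a sketch of the general technique, not the paper's exact construction.

```python
import cv2
import numpy as np

# Illustrative setup: a large "decoy" a human would inspect, and the
# small input resolution the model actually consumes (both hypothetical).
H, W = 1024, 1024
h, w = 64, 64
decoy = np.full((H, W, 3), 220, dtype=np.uint8)   # benign-looking gray image
target = np.zeros((h, w, 3), dtype=np.uint8)      # what the attacker wants
target[..., 2] = 255                              # the model to see (solid red)

# Nearest-neighbor downscaling copies one source pixel per output pixel.
# Probe which source pixels get sampled by resizing index maps, so the
# sketch does not depend on the library's exact rounding rule.
rows = np.repeat(np.arange(H, dtype=np.float32)[:, None], W, axis=1)
cols = np.repeat(np.arange(W, dtype=np.float32)[None, :], H, axis=0)
ys = cv2.resize(rows, (w, h), interpolation=cv2.INTER_NEAREST).astype(int)
xs = cv2.resize(cols, (w, h), interpolation=cv2.INTER_NEAREST).astype(int)

# Overwrite only the sampled pixels (~0.4% of the decoy), leaving the
# full-resolution image visually almost unchanged for a human reviewer.
attack = decoy.copy()
attack[ys, xs] = target

small = cv2.resize(attack, (w, h), interpolation=cv2.INTER_NEAREST)
assert np.array_equal(small, target)   # the model sees only the target
```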
We show that defenses exist and are trivial to apply, provided that defenders
are aware of the threat.
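A minimal sketch of two such defenses, under the same assumptions as the sketch above: switch to an averaging interpolation kernel, and flag inputs whose downscales disagree across kernels. The function names and detection threshold are our own illustrative choices.

```python
import cv2
import numpy as np

def robust_downscale(img: np.ndarray, w: int, h: int) -> np.ndarray:
    """Downscale with area interpolation, which averages every source
    pixel in each output cell; a sparse set of attacker-controlled
    pixels then contributes almost nothing to the result."""
    return cv2.resize(img, (w, h), interpolation=cv2.INTER_AREA)

def flags_scaling_attack(img: np.ndarray, w: int, h: int,
                         threshold: float = 10.0) -> bool:
    """Consistency check: a benign image downscales to roughly the same
    result under a sampling kernel and an averaging kernel, while a
    scaling attack makes them disagree sharply (threshold illustrative)."""
    sampled = cv2.resize(img, (w, h), interpolation=cv2.INTER_NEAREST)
    averaged = cv2.resize(img, (w, h), interpolation=cv2.INTER_AREA)
    diff = np.abs(sampled.astype(np.float32) - averaged.astype(np.float32))
    return float(diff.mean()) > threshold
```

On the synthetic attack image constructed above, the check fires: the nearest-neighbor downscale is the red target while the area downscale stays close to the gray decoy, so the mean difference far exceeds the threshold.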
These attacks and defenses help to establish the role of input sanitization in
machine learning.