Deep learning image classification is vulnerable to adversarial attack, even
if the attacker changes just a small patch of the image. We propose a defense
against patch attacks based on partially occluding the image around each
candidate patch location, so that, wherever the patch is placed, a few of the
occlusions completely hide it. We demonstrate on CIFAR-10, Fashion MNIST, and MNIST that our defense
provides certified security against patch attacks up to a certain size.
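
As a rough illustration of the occlusion idea (a minimal sketch, not the paper's exact procedure), the code below generates occluded copies of an image over a grid of candidate patch locations; the function name, occlusion size, stride, and gray fill value are all illustrative assumptions.

```python
import numpy as np

def occluded_copies(image, occlusion_size=5, stride=2):
    """Return copies of `image` (float array, H x W x C) with a square
    occlusion placed at each location on a stride grid. If the occlusion
    is large enough relative to the attacker's patch, some copies are
    guaranteed to hide the patch completely."""
    h, w = image.shape[:2]
    copies = []
    for top in range(0, h - occlusion_size + 1, stride):
        for left in range(0, w - occlusion_size + 1, stride):
            occluded = image.copy()
            # Fill the occluded square with gray (the fill value is arbitrary here).
            occluded[top:top + occlusion_size, left:left + occlusion_size] = 0.5
            copies.append(occluded)
    return copies

# In the defense, each occluded copy would be classified, and the resulting
# grid of predictions checked for agreement that no single patch could overturn.
```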