Recent work has shown that additive threat models, which only permit the
addition of bounded noise to the pixels of an image, are insufficient to fully
capture the space of imperceptible adversarial examples. For example, small
rotations and spatial transformations can fool classifiers while remaining
imperceptible to humans, yet they lie at a large additive distance from the
original images. In this work, we leverage quantitative perceptual metrics such
as LPIPS and SSIM to define a novel threat model for adversarial attacks.
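As a minimal illustration of what a perceptually constrained threat model can look like in practice, the sketch below computes LPIPS and SSIM between a clean and a perturbed image using the `lpips` PyTorch package and `scikit-image`. The helper `within_perceptual_budget` and the budget values `eps_lpips` and `eps_ssim` are hypothetical placeholders, not the formulation used in this work.

```python
# Illustrative sketch (not this paper's exact threat model): a perturbed
# image x_adv is admitted if its perceptual distance from the clean image
# x stays within chosen budgets.
import torch
import lpips                                   # pip install lpips
from skimage.metrics import structural_similarity as ssim

lpips_fn = lpips.LPIPS(net='alex')             # learned perceptual metric

def within_perceptual_budget(x, x_adv, eps_lpips=0.05, eps_ssim=0.95):
    """x, x_adv: float tensors of shape (1, 3, H, W) scaled to [0, 1]."""
    # LPIPS expects inputs in [-1, 1]; lower distance = more similar.
    d_lpips = lpips_fn(2 * x - 1, 2 * x_adv - 1).item()
    # SSIM operates on HxWxC arrays; higher score = more similar.
    s = ssim(x[0].permute(1, 2, 0).numpy(),
             x_adv[0].permute(1, 2, 0).numpy(),
             channel_axis=-1, data_range=1.0)
    return d_lpips <= eps_lpips and s >= eps_ssim
```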
To demonstrate the value of quantifying the perceptual distortion of
adversarial examples, we present and employ a unifying framework that fuses
different attack styles. We first prove that our framework produces images
unattainable by any of the attack styles in isolation. We then perform
adversarial training using attacks generated by our framework, demonstrating
that networks are robust only to the classes of adversarial perturbations they
have been trained against and that combination attacks are stronger than any of
their individual components. Finally, we show experimentally that our combined
attacks retain the same level of perceptual distortion as the individual
attacks while inducing far higher misclassification rates.
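To convey the spirit of a combined attack, the sketch below composes a small spatial transformation with a bounded additive perturbation. It is an illustrative, unoptimized composition rather than the attack generated by our framework; the rotation angle and noise budget are arbitrary example values.

```python
# Hedged sketch: compose a small rotation (spatial attack style) with a
# bounded additive perturbation (additive attack style). Real attacks
# would optimize both components against the target classifier.
import torch
import torchvision.transforms.functional as TF

def combined_perturbation(x, angle=2.0, eps=4 / 255):
    """x: image tensor of shape (C, H, W) with values in [0, 1]."""
    x_rot = TF.rotate(x, angle)                           # small spatial transform
    delta = torch.empty_like(x_rot).uniform_(-eps, eps)   # bounded additive noise
    return (x_rot + delta).clamp(0.0, 1.0)
```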