The vulnerability of deep neural networks to adversarial examples has
become a significant concern for deploying these models in sensitive domains.
Devising a definitive defense against such attacks has proven challenging,
and methods that rely on detecting adversarial samples remain valid only when
the attacker is oblivious to the detection mechanism. In this paper, we propose
a principled adversarial example detection method that can withstand
norm-constrained white-box attacks. Inspired by one-versus-the-rest
classification, for a K-class classification problem we train K binary
classifiers, where the i-th binary classifier is trained to distinguish between
clean data of class i and adversarially perturbed samples of other classes. At
test time, we first use a trained classifier to obtain the predicted label (say k)
of the input, and then use the k-th binary classifier to determine whether the
input is a clean sample (of class k) or an adversarially perturbed example (of
other classes).
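To make the test-time procedure concrete, here is a minimal sketch in
PyTorch; `base_classifier`, `binary_classifiers`, and `threshold` are
illustrative placeholders rather than the paper's actual models or settings.

```python
import torch

def detect(x, base_classifier, binary_classifiers, threshold=0.5):
    """Hypothetical detection pipeline: predict a label with the K-way
    classifier, then consult the matching binary classifier."""
    with torch.no_grad():
        # Step 1: predicted label k from the standard K-way classifier.
        k = base_classifier(x).argmax(dim=-1).item()
        # Step 2: the k-th binary classifier scores x as clean data of
        # class k versus an adversarially perturbed sample of another class.
        p_clean = torch.sigmoid(binary_classifiers[k](x)).item()
    # Flag the input as adversarial when the clean score is low.
    return k, p_clean < threshold
```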
We further devise a generative approach to detecting/classifying adversarial
examples by interpreting each binary classifier as an unnormalized density
model of the class-conditional data.
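As a rough illustration of this generative reading, the sketch below treats
each binary classifier's logit as an unnormalized class-conditional
log-density and classifies/detects by comparing these densities across
classes; the function names and the threshold `tau` are assumptions for
illustration, not the paper's implementation.

```python
import torch

def generative_classify(x, binary_classifiers, tau=0.0):
    """Hypothetical generative variant: read the i-th binary logit as
    the log of an unnormalized density model for class-i data."""
    with torch.no_grad():
        # Stack the K unnormalized log-densities for input x.
        log_densities = torch.stack(
            [clf(x).squeeze() for clf in binary_classifiers]
        )
    k = log_densities.argmax().item()  # class with the highest density
    # If x has low density under every class model, flag it as adversarial.
    is_adv = log_densities.max().item() < tau
    return k, is_adv
```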
We provide a comprehensive evaluation of the above adversarial example
detection/classification methods and demonstrate their competitive
performance and compelling properties.