Deep learning classifiers are known to be vulnerable to adversarial examples.
A recent paper presented at ICML 2019 proposed a statistical-test detection
method based on the observation that the logits of noise-perturbed adversarial
examples are biased toward the true class. The method is evaluated on the
CIFAR-10 dataset and is shown to achieve a 99% true positive rate (TPR) at only
a 1% false positive rate (FPR). In this paper, we first develop a
classifier-based adaptation of the statistical test and show that it improves
detection performance.
We then propose the Logit Mimicry Attack, which generates adversarial examples
whose logits mimic those of benign images. We show that this attack bypasses
both the statistical test and the classifier-based method, reducing their TPRs
to less than 2.2% and 1.6%, respectively, even at a 5% FPR.
We finally show that a classifier-based detector trained on the logits of
mimicry adversarial examples can be evaded by an adaptive attacker that
specifically targets the detector. Furthermore, even a detector that is
iteratively retrained against such an adaptive attacker cannot be made robust,
indicating that logit statistics cannot be used to reliably detect adversarial
examples.
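As a rough illustration of the mimicry objective, the following PyTorch sketch
crafts an L-infinity-bounded perturbation whose logits, on both the adversarial
input and its noise-perturbed copies, are pushed toward a benign logit profile.
The MSE loss, the PGD-style updates, and all names and hyperparameters here are
our illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a logit-mimicry attack. Assumptions (ours, not the
# paper's): an MSE mimicry loss, PGD-style sign updates, and the specific
# hyperparameter values below.
import torch
import torch.nn.functional as F

def logit_mimicry_attack(model, x, target_logits, eps=8 / 255, alpha=2 / 255,
                         steps=40, noise_sigma=0.05, n_noise=4):
    """Craft a perturbation of `x` whose logits mimic `target_logits`
    (e.g., the mean logits of benign images of the attacker's target
    class) on both the adversarial input and its noisy copies, so that
    a detector testing noise-perturbed logits also sees a benign-looking
    profile."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Mimicry loss on the adversarial input itself ...
        loss = F.mse_loss(model(x_adv), target_logits)
        # ... and on several noisy copies, mirroring the statistic the
        # detection method computes from noise-perturbed logits.
        for _ in range(n_noise):
            noisy = (x_adv + noise_sigma * torch.randn_like(x_adv)).clamp(0, 1)
            loss = loss + F.mse_loss(model(noisy), target_logits)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()  # descend the mimicry loss
            # Project back into the eps-ball around x and the valid pixel range.
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()
```

Because the loss is averaged over fresh noise draws at every step, the crafted
example remains benign-looking under the randomized perturbations a
noisy-logit detector would apply at test time, which is what lets the attack
evade both the statistical test and its classifier-based adaptation.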