Deep Neural Networks (DNNs) have recently led to significant improvements in
many fields. However, DNNs are vulnerable to adversarial examples: samples
crafted with imperceptible perturbations that nevertheless dramatically
mislead the networks. Moreover, adversarial examples can be used to attack
various kinds of DNN-based systems even when the adversary has no access to
the underlying model. Many defense methods have been proposed, such as
obfuscating network gradients or detecting adversarial examples. However, it
has been shown that these defenses are either ineffective or unable to resist
secondary adversarial attacks. In this paper, we point out that steganalysis
can be applied to adversarial example detection, and propose a method that
enhances steganalysis features by estimating the probability of modifications
caused by adversarial attacks. Experimental results show that the proposed
method can accurately detect adversarial examples. Moreover, secondary
adversarial attacks cannot be performed directly against our method, because
it is based not on a neural network but on high-dimensional artificial
features and an FLD (Fisher Linear Discriminant) ensemble.
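
To make the final classification stage concrete, the following is a minimal
sketch (in Python/NumPy, not taken from the paper) of an FLD ensemble in the
spirit of steganalysis ensemble classifiers: each base learner is a Fisher
Linear Discriminant trained on a random subspace of the high-dimensional
feature vector, and the individual decisions are fused by majority vote. The
class name, learner count, and subspace dimension are illustrative
assumptions.

import numpy as np

class FLDEnsemble:
    # Ensemble of Fisher Linear Discriminants over random feature subspaces.
    # Hyperparameters here are placeholders, not the paper's settings.
    def __init__(self, n_learners=51, subspace_dim=200, seed=0):
        self.n_learners = n_learners
        self.subspace_dim = subspace_dim
        self.rng = np.random.default_rng(seed)
        self.learners = []  # list of (feature indices, projection w, threshold)

    def fit(self, X, y):
        # X: (n_samples, n_features) feature matrix; y: 0 = clean, 1 = adversarial.
        d = X.shape[1]
        for _ in range(self.n_learners):
            idx = self.rng.choice(d, self.subspace_dim, replace=False)
            X0, X1 = X[y == 0][:, idx], X[y == 1][:, idx]
            mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
            # Within-class scatter, regularized for numerical stability.
            Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
            Sw += 1e-6 * np.eye(self.subspace_dim)
            w = np.linalg.solve(Sw, mu1 - mu0)   # Fisher discriminant direction
            thresh = 0.5 * (w @ mu0 + w @ mu1)   # midpoint decision threshold
            self.learners.append((idx, w, thresh))
        return self

    def predict(self, X):
        # Majority vote over all base FLDs; 1 = flagged as adversarial.
        votes = np.stack([(X[:, idx] @ w > t).astype(int)
                          for idx, w, t in self.learners])
        return (votes.mean(axis=0) > 0.5).astype(int)

Training each base learner on a small random subspace keeps the per-learner
covariance inversion cheap, which is what makes this style of ensemble
practical when the steganalysis feature vector has tens of thousands of
dimensions.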