Deep neural networks have demonstrated impressive performance in biometric
applications. However, their performance is highly vulnerable to
carefully crafted input samples known as adversarial examples. In this paper,
we present three defense strategies to detect adversarial iris examples. All
three strategies are based on wavelet-domain denoising of the input examples:
each wavelet sub-band is investigated, and the sub-bands most affected by the
adversary are removed.
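For concreteness, the following is a minimal sketch of the wavelet sub-band decomposition underlying all three strategies, using the PyWavelets library; the wavelet family ("db4") and the decomposition level are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
import pywt

def decompose(iris_image, wavelet="db4", level=3):
    """Decompose a 2-D (normalized) iris image into wavelet sub-bands.

    Returns [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]: the
    low-frequency approximation followed by the (horizontal, vertical,
    diagonal) detail sub-bands, coarsest to finest.
    """
    return pywt.wavedec2(iris_image, wavelet=wavelet, level=level)

def reconstruct(coeffs, wavelet="db4"):
    """Invert the decomposition back to the image domain."""
    return pywt.waverec2(coeffs, wavelet=wavelet)
```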
The first proposed defense strategy reconstructs multiple denoised versions of
the input example by manipulating the mid- and high-frequency components of its
wavelet-domain representation, and classifies the input based on a majority
vote over the classification results of the denoised examples.
The second and third proposed defense strategies denoise each wavelet-domain
sub-band individually and identify the sub-bands most likely affected by the
adversary using the reconstruction error computed for each sub-band.
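A sketch of this per-sub-band detection idea follows; the soft-threshold denoiser and the decision threshold `tau` are assumptions standing in for the paper's actual denoising and decision rule:

```python
import numpy as np
import pywt

def flag_affected_subbands(iris_image, wavelet="db4", level=3, tau=0.1):
    """Denoise each detail sub-band and flag those whose reconstruction
    error is anomalously large, a likely sign of adversarial perturbation.

    The soft-threshold denoiser and the threshold `tau` are illustrative
    stand-ins, not the paper's exact configuration.
    """
    coeffs = pywt.wavedec2(iris_image, wavelet=wavelet, level=level)
    flagged = []
    for i, detail in enumerate(coeffs[1:], start=1):
        for name, band in zip(("H", "V", "D"), detail):
            denoised = pywt.threshold(band, np.std(band), mode="soft")
            # Relative reconstruction error of this sub-band after denoising.
            err = np.linalg.norm(band - denoised) / (np.linalg.norm(band) + 1e-12)
            if err > tau:
                flagged.append((i, name))
    return flagged
```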
We evaluate the proposed defense strategies against several attack scenarios
and compare the results with five state-of-the-art defense strategies.