Deep learning methods have shown state-of-the-art performance on a range of
tasks, from computer vision to natural language processing. However, it is well
known that such systems are vulnerable to attackers who craft inputs in order
to cause misclassification. The perturbation an attacker needs to introduce in
order to cause such a misclassification can be extremely small, and is often
imperceptible. This is a significant security concern, particularly where
misclassification can cause harm to humans.
We thus propose Deep Latent Defence, an architecture which combines
adversarial training with a detection system. At its core, Deep Latent Defence
has an adversarially trained neural network. A series of encoders takes the
intermediate-layer representation of data as it passes through the network and
projects it into a latent space, which we use to detect adversarial samples via
a $k$-NN classifier. We present results for both grey- and white-box
attackers, as well as an adaptive $L_{\infty}$-bounded attack constructed
specifically to evade our defence. We find that, even under the strongest
attacker model we investigated, our defence still offers significant
protection.
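The core detection idea described above, flagging inputs whose latent-space embedding sits far from the training distribution under a $k$-NN rule, can be illustrated with a minimal sketch. This is not the paper's implementation: the encoder is replaced by stand-in latent vectors, and the decision rule (mean distance to the $k$ nearest training embeddings against a fixed threshold) is a hypothetical simplification.

```python
import numpy as np

def knn_detect(latent_train, latent_query, k=5, threshold=1.0):
    """Flag queries whose mean distance to their k nearest training
    embeddings exceeds a threshold (illustrative rule only; the
    paper's actual decision procedure may differ)."""
    # Pairwise Euclidean distances, shape (n_query, n_train).
    d = np.linalg.norm(
        latent_query[:, None, :] - latent_train[None, :, :], axis=-1
    )
    # Mean distance to the k nearest training embeddings per query.
    knn_d = np.sort(d, axis=1)[:, :k].mean(axis=1)
    return knn_d > threshold

rng = np.random.default_rng(0)
# Stand-in latent embeddings of clean training data.
clean = rng.normal(0.0, 0.1, size=(200, 8))
# Five in-distribution queries, five far-off "adversarial" ones.
queries = np.vstack([
    rng.normal(0.0, 0.1, size=(5, 8)),
    rng.normal(3.0, 0.1, size=(5, 8)),
])
flags = knn_detect(clean, queries, k=5, threshold=1.0)
print(flags)  # first five False (accepted), last five True (flagged)
```

In the full architecture, one such detector would operate on the latent projection of each instrumented intermediate layer, so an adversarial input must simultaneously fool the classifier and stay close to clean data at every depth.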