The introduction of robust optimisation has pushed the state of the art in
defending against adversarial attacks. In particular, projected gradient
descent (PGD)-based training, which uses PGD as a reliable and universal
"first-order adversary", has proven effective in defending against adversarial
inputs. However, the behaviour of such optimisation has not been studied in
light of a fundamentally different class of attacks called backdoors.
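For concreteness, the following is a minimal sketch of a PGD attack of the kind used as such a first-order adversary; the model, loss, and hyper-parameters (`eps`, `alpha`, `steps`) are illustrative assumptions rather than the paper's exact configuration. Robust optimisation then trains the network on the resulting perturbed inputs instead of the clean ones.

```python
# A minimal PGD (projected gradient descent) attack sketch in PyTorch.
# The model, loss, and hyper-parameters are illustrative assumptions,
# not the exact configuration used in the paper.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Return adversarial examples within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    # Random start inside the eps-ball, as in standard PGD-based training.
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss along the gradient sign, then project back.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # eps-ball projection
        x_adv = torch.clamp(x_adv, 0.0, 1.0)  # keep a valid image
    return x_adv.detach()
```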
In this paper, we study how to inject and defend against backdoor attacks for
robust models trained using PGD-based robust optimisation. We demonstrate that
these models are susceptible to backdoor attacks and observe that backdoors are
reflected in their feature representations. We leverage this observation to
detect backdoor-infected models via a detection technique called AEGIS.
Specifically, given a robust Deep Neural Network (DNN) trained using a
PGD-based first-order adversarial training approach, AEGIS uses feature
clustering to effectively detect whether the DNN is backdoor-infected or clean.
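As one plausible instantiation of this feature-clustering step, the sketch below clusters per-class feature vectors and flags a model whose features split into more clusters than expected; the use of penultimate-layer activations, t-SNE reduction, Mean Shift clustering, and the single-cluster baseline are assumptions for illustration, not a statement of AEGIS's exact pipeline.

```python
# A feature-clustering detection sketch in the spirit of AEGIS.
# Penultimate-layer features, t-SNE, Mean Shift, and the decision rule
# are illustrative assumptions, not the paper's exact method.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import MeanShift

def count_feature_clusters(features):
    """Cluster high-dimensional feature vectors and return the cluster count."""
    # Reduce to 2-D so density-based clustering is stable and inspectable.
    embedded = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    labels = MeanShift().fit_predict(embedded)
    return len(np.unique(labels))

def looks_backdoored(features_per_class, expected_clusters=1):
    """Flag the model if any class's features split into extra clusters.

    `features_per_class` maps a class label to an (n_samples, n_features)
    array of, e.g., penultimate-layer activations for that class's inputs.
    """
    return any(
        count_feature_clusters(feats) > expected_clusters
        for feats in features_per_class.values()
    )
```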
In our evaluation of several visible and hidden backdoor triggers on major
classification tasks using the CIFAR-10, MNIST and FMNIST datasets, AEGIS
effectively detects PGD-trained robust DNNs infected with backdoors. AEGIS
detects such backdoor-infected models with 91.6% accuracy (11 out of 12 tested
models), without any false positives. Furthermore, AEGIS detects the targeted
class in the backdoor-infected model with a reasonably low (11.1%) false
positive rate. Our investigation reveals that the salient features of
adversarially robust DNNs offer a promising avenue for breaking the stealthy
nature of backdoor attacks.