With the widespread use of machine learning, concerns over its security and
reliability have become prevalent. As such, many defenses have been developed
to harden neural networks against adversarial examples: imperceptibly
perturbed inputs that are reliably misclassified. Adversarial training, in
which adversarial examples are generated and used during training, is one of
the few known defenses that reliably withstands such attacks.
However, adversarial training imposes a significant training overhead and
scales poorly with model complexity and input dimension. In this paper, we
propose Robust Representation Matching (RRM), a low-cost method to transfer the
robustness of an adversarially trained model to a new model being trained for
the same task, irrespective of architectural differences. Inspired by
student-teacher learning, our method introduces a novel training loss that
encourages the student to learn the teacher's robust representations. Compared
to prior works, RRM is superior with respect to both model performance and
adversarial training time. On CIFAR-10, RRM trains a robust model $\sim
1.8\times$ faster than the state-of-the-art. Furthermore, RRM remains effective
on higher-dimensional datasets. On Restricted-ImageNet, RRM trains a ResNet50
model $\sim 18\times$ faster than standard adversarial training.
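
As a rough illustration of the representation-matching idea described above,
the following PyTorch-style sketch shows one plausible form of such a
training loss: cross-entropy on natural inputs plus a penalty pulling the
student's penultimate-layer features toward those of a frozen, adversarially
trained teacher. The function name, the \texttt{features}/\texttt{head}
interfaces, the squared-$\ell_2$ matching metric, and the weight \texttt{lam}
are illustrative assumptions, not the paper's exact formulation.

\begin{verbatim}
import torch
import torch.nn.functional as F

def rrm_style_loss(student_logits, student_feats,
                   teacher_feats, labels, lam=1.0):
    # Standard classification loss on natural (unperturbed) inputs.
    task_loss = F.cross_entropy(student_logits, labels)
    # Representation matching: pull the student's penultimate-layer
    # features toward the frozen teacher's. The squared-L2 metric and
    # the weight `lam` are assumptions made for illustration.
    match_loss = F.mse_loss(student_feats, teacher_feats)
    return task_loss + lam * match_loss

# Hypothetical usage with a frozen, adversarially trained teacher:
# with torch.no_grad():
#     t_feats = teacher.features(x)
# s_feats = student.features(x)
# loss = rrm_style_loss(student.head(s_feats), s_feats, t_feats, y)
\end{verbatim}

Under this reading, the student sees only natural inputs during training and
so sidesteps the per-step adversarial-example generation loop, which is one
plausible source of the speedups reported above.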