Deep learning systems have become ubiquitous in many aspects of our lives.
Unfortunately, such systems have been shown to be vulnerable to
adversarial attacks, making them prone to potential unlawful uses. Designing
deep neural networks that are robust to adversarial attacks is a fundamental
step toward making such systems safer and deployable in a broader variety of
applications (e.g., autonomous driving); more importantly, it is a necessary
step toward designing novel and more advanced architectures built on new
computational paradigms rather than marginally extending existing ones. In this paper
we introduce PeerNets, a novel family of convolutional networks alternating
classical Euclidean convolutions with graph convolutions to harness information
from a graph of peer samples. This results in a form of non-local forward
propagation in the model, where latent features are conditioned on the global
structure induced by the graph. The resulting networks are up to three times
more robust to a variety of white- and black-box adversarial attacks than
conventional architectures, with almost no drop in accuracy.
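To make the peer-conditioning idea concrete, the following is a minimal sketch of a peer-regularization step: each latent feature vector of a sample is replaced by an attention-weighted average of its nearest-neighbor features drawn from peer samples. This is an illustrative assumption, not the paper's exact layer; in particular, the fixed softmax-over-distances attention and the function name `peer_regularize` are placeholders for what the full model would learn end-to-end.

```python
import numpy as np

def peer_regularize(x, peers, k=3):
    """Sketch of non-local peer-based smoothing (illustrative, not the
    paper's exact formulation).

    x:     (N, d) latent feature vectors of one sample
           (e.g., a flattened spatial grid of CNN features).
    peers: (M, d) feature vectors pooled from peer samples.
    k:     number of nearest peer features to attend over.

    Returns (N, d): each row of x replaced by an attention-weighted
    mean of its k nearest peer features.
    """
    # Pairwise squared Euclidean distances between x and peers: (N, M)
    d2 = ((x[:, None, :] - peers[None, :, :]) ** 2).sum(axis=-1)
    # Indices of the k nearest peer features for each query vector
    idx = np.argsort(d2, axis=1)[:, :k]                      # (N, k)
    nd2 = np.take_along_axis(d2, idx, axis=1)                # (N, k)
    # Simple attention: softmax over negative squared distances
    # (a learned attention would replace this in a real model)
    w = np.exp(-nd2)
    w /= w.sum(axis=1, keepdims=True)                        # (N, k)
    # Weighted average of the selected peer features
    return (w[:, :, None] * peers[idx]).sum(axis=1)          # (N, d)
```

In a full network, a step like this would alternate with ordinary convolutions, so that each sample's latent representation is pulled toward the manifold spanned by its peers, which is the intuition behind the robustness gain described above.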