Graph Neural Networks (GNNs), a generalization of neural networks to
graph-structured data, are typically implemented via message passing between
the entities of a graph. While GNNs are effective for node classification, link
prediction and graph classification, they are vulnerable to adversarial
attacks, i.e., a small perturbation to the structure can lead to a non-trivial
performance degradation. In this work, we propose Uncertainty Matching GNN
(UM-GNN), which aims to improve the robustness of GNN models, particularly
against poisoning attacks on the graph structure, by leveraging epistemic
uncertainties from the message-passing framework. More specifically, we propose
to build a surrogate predictor that does not directly access the graph
structure, but systematically extracts reliable knowledge from a standard GNN
through a novel uncertainty-matching strategy. Interestingly, this decoupling
makes UM-GNN immune to evasion attacks by design and yields significantly
improved robustness against poisoning attacks. Using empirical studies with
standard benchmarks and a suite of global and targeted attacks, we demonstrate
the effectiveness of UM-GNN when compared to existing baselines, including the
state-of-the-art robust GCN.
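The uncertainty-matching idea described above can be illustrated with a minimal sketch. This is not the paper's exact formulation; it assumes a hypothetical setup where epistemic uncertainty is estimated from the disagreement of Monte Carlo dropout forward passes of a teacher GNN, and a structure-free surrogate is trained to match the teacher's mean predictions, with each node's contribution down-weighted by the teacher's uncertainty. All array shapes and the entropy-based weighting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical setup: 5 nodes, 3 classes, 10 stochastic (dropout) passes
T, N, C = 10, 5, 3
mc_logits = rng.normal(size=(T, N, C))   # teacher GNN logits per MC sample
mc_probs = softmax(mc_logits)            # (T, N, C)

# Teacher's mean prediction across MC samples
mean_probs = mc_probs.mean(axis=0)       # (N, C)

# Predictive entropy as a simple per-node uncertainty proxy;
# confident nodes (low entropy) receive matching weight close to 1
entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
weights = 1.0 - entropy / np.log(C)

# Stand-in for the surrogate's (structure-free) predictions
surrogate_probs = softmax(rng.normal(size=(N, C)))

# Uncertainty-weighted matching loss: per-node cross-entropy against
# the teacher's mean prediction, scaled by the confidence weight
per_node_ce = -(mean_probs * np.log(surrogate_probs + 1e-12)).sum(axis=1)
loss = float((weights * per_node_ce).mean())
```

Because the surrogate never reads the adjacency structure, a perturbation to the graph at test time cannot alter its predictions, which is the intuition behind the evasion-attack immunity claimed above.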