Deep neural networks (DNNs) have achieved remarkable performance in various
tasks. However, recent studies have shown that DNNs can be easily fooled by
small perturbations of the input, known as adversarial attacks. As extensions
of DNNs to graph data, Graph Neural Networks (GNNs) have been shown to inherit
this vulnerability: an adversary can mislead a GNN into making wrong
predictions by modifying the graph structure, for example by manipulating only a few edges.
This vulnerability has raised serious concerns about deploying GNNs in
safety-critical applications and has attracted increasing research attention in
recent years. Thus, it is necessary and timely to provide a comprehensive
overview of existing graph adversarial attacks and their countermeasures. In this
survey, we categorize existing attacks and defenses, and review the
corresponding state-of-the-art methods. Furthermore, we have developed a
repository with representative algorithms
(https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph). The
repository enables us to conduct empirical studies that deepen our understanding
of attacks and defenses on graphs.
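As a rough illustration of how such empirical studies can be run with the repository, the following sketch perturbs a benchmark graph with a representative structure attack and then compares a GCN's test accuracy on the clean versus the attacked graph. It follows the usage pattern documented in DeepRobust, but the specific choices (the Cora dataset, the Metattack attacker, a GCN victim, and a 5% edge-perturbation budget) are illustrative assumptions, not a prescribed workflow.

```python
import numpy as np
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

# Load a benchmark graph (Cora citation network); choice of dataset is illustrative.
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)

# Train a surrogate GCN that the attacker uses to estimate gradients.
surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item() + 1,
                nhid=16, with_relu=False, device='cpu')
surrogate.fit(features, adj, labels, idx_train)

# Run a global structure attack (Metattack) with an assumed 5% edge budget.
attacker = Metattack(model=surrogate, nnodes=adj.shape[0],
                     feature_shape=features.shape, device='cpu')
n_perturbations = int(0.05 * (adj.sum() // 2))
attacker.attack(features, adj, labels, idx_train, idx_unlabeled,
                n_perturbations, ll_constraint=False)
modified_adj = attacker.modified_adj

# Train and test a victim GCN on the clean graph.
victim = GCN(nfeat=features.shape[1], nclass=labels.max().item() + 1,
             nhid=16, device='cpu')
victim.fit(features, adj, labels, idx_train, idx_val)
victim.test(idx_test)  # accuracy on the clean graph

# Retrain on the perturbed graph to measure the attack's impact.
victim = GCN(nfeat=features.shape[1], nclass=labels.max().item() + 1,
             nhid=16, device='cpu')
victim.fit(features, modified_adj, labels, idx_train, idx_val)
victim.test(idx_test)  # accuracy after the structure attack
```

The same pattern applies to other attackers and defense models in the repository: swap the attack class or replace the victim GCN with a defense method, keeping the data loading and evaluation steps unchanged.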