Graph deep learning models, such as graph convolutional networks (GCNs),
achieve remarkable performance on tasks over graph data. Like other deep
models, however, they often suffer from adversarial attacks. Compared with
non-graph data, the discreteness of features, the graph connectivity, and
the different notions of imperceptible perturbation pose unique challenges
and opportunities for adversarial attacks and defenses on graph data. In
this paper, we propose both attack and defense techniques.
For attack, we show that the discreteness problem can be resolved by
introducing integrated gradients, which accurately reflect the effect of
perturbing individual features or edges while still benefiting from parallel
computation. For defense, we observe that graphs adversarially manipulated
by targeted attacks differ statistically from normal graphs. Based on this
observation, we propose a defense approach that inspects the graph and
recovers the potential adversarial perturbations. Experiments on a number of
datasets demonstrate the effectiveness of the proposed methods.
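The integrated-gradients idea mentioned above can be sketched as follows: average the model's gradient along a straight path from a baseline to the input, then scale by the input difference, so that flipping a discrete feature or edge gets a faithful importance score. This is a minimal illustration, not the paper's GCN pipeline; the logistic toy model `f`, its analytic gradient, and the all-zeros baseline are assumptions made for self-containment.

```python
import numpy as np

def f(x):
    # Toy differentiable "model": a logistic score over the feature sum.
    # Stands in for the GCN's output on a target node (an assumption here).
    return 1.0 / (1.0 + np.exp(-x.sum()))

def grad_f(x):
    # Analytic gradient of the toy model w.r.t. each feature.
    s = f(x)
    return np.full_like(x, s * (1.0 - s))

def integrated_gradients(x, baseline, steps=50):
    # Midpoint Riemann approximation of the path integral:
    # (x - baseline) * mean of grad_f along the line from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

# Binary features, e.g. edges present (1) or absent (0) for a target node.
x = np.array([1.0, 0.0, 1.0])
baseline = np.zeros_like(x)  # all-zeros baseline scores flipping 1 -> 0
scores = integrated_gradients(x, baseline)

# Completeness property: attributions sum to f(x) - f(baseline).
print(scores, scores.sum(), f(x) - f(baseline))
```

Because the per-step gradients along the path are independent, they can be evaluated in a single batched forward/backward pass, which is the parallelism the abstract alludes to; features equal to the baseline receive zero attribution by construction.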