Recent studies have shown that graph neural networks (GNNs) lack robustness and
are vulnerable to adversarial perturbations, and can therefore be easily fooled.
Most existing attacks on GNNs rely on gradient information to guide the attack
and achieve strong performance. However, their high time and space complexity
makes them unmanageable for large-scale graphs and is the major bottleneck
preventing practical use. We argue that the main reason is that these methods
must operate on the whole graph, so their time and space costs grow with the
scale of the data. In this work, we propose an efficient Simplified
Gradient-based Attack (SGA) method to bridge this gap. SGA causes GNNs to
misclassify specific target nodes through a multi-stage attack framework that
requires only a much smaller subgraph. In addition, we present a practical
metric named Degree Assortativity Change (DAC) to measure the impact of
adversarial attacks on graph data. We evaluate our attack method on four
real-world graph datasets by attacking several commonly used GNNs. The
experimental results demonstrate that SGA achieves significant improvements in
time and memory efficiency while maintaining competitive attack performance
compared to state-of-the-art attack techniques. Code is available at:
https://github.com/EdisonLeeeee/SGAttack.
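To make the DAC metric concrete, the following is a minimal pure-Python sketch. It assumes DAC is the absolute difference in degree assortativity (the Pearson correlation of degrees at either end of each edge) before and after an attack; the function names and this exact form are illustrative assumptions, not taken from the SGA implementation.

```python
import math
from statistics import mean

def degree_assortativity(edges):
    """Pearson correlation of the degrees at the two ends of each edge,
    with each undirected edge counted in both directions."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]  # edge seen as (u, v) ...
        ys += [deg[v], deg[u]]  # ... and as (v, u)
    mx = mean(xs)  # by symmetry, mean(xs) == mean(ys)
    cov = sum((x - mx) * (y - mx) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)  # assumes a non-regular graph (var > 0)
    return cov / var

def degree_assortativity_change(clean_edges, attacked_edges):
    """DAC, assumed here to be the absolute assortativity difference."""
    return abs(degree_assortativity(attacked_edges)
               - degree_assortativity(clean_edges))

# Toy perturbation: the attacker deletes the edge (1, 2) from a small graph.
clean = [(0, 1), (0, 2), (0, 3), (1, 2)]
attacked = [(0, 1), (0, 2), (0, 3)]
dac = degree_assortativity_change(clean, attacked)  # 2/7, about 0.286
```

A larger DAC indicates that the attack distorted the graph's degree-mixing pattern more strongly, which is what makes it a useful measure of attack impact.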