Graph convolutional neural networks, which learn aggregations over neighboring nodes, have achieved strong performance in node classification tasks. However, recent studies have reported that such graph convolutional node classifiers can be deceived by adversarial perturbations on graphs. Because graph convolutions aggregate neighbor features, an attacker can influence a node's classification result by poisoning its neighbors.
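For intuition, a brief sketch of why neighbor poisoning propagates, using the standard propagation rule of Kipf and Welling (the abstract does not specify the exact architecture, so this is an assumption): each graph convolutional layer computes

    H^{(l+1)} = \sigma\big( \tilde{D}^{-1/2} \tilde{A} \, \tilde{D}^{-1/2} H^{(l)} W^{(l)} \big), \qquad \tilde{A} = A + I, \quad \tilde{D}_{ii} = \textstyle\sum_j \tilde{A}_{ij},

so each layer mixes a node's representation with those of its one-hop neighbors, and after m layers the output at a node is a function of all features within m hops.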
Given an attributed graph and a node classifier, how can we evaluate robustness against such indirect adversarial attacks? Can we generate strong adversarial perturbations that are effective not only on one-hop neighbors, but also on nodes farther from the target? In this paper, we demonstrate that a node classifier can be deceived with high confidence by poisoning just a single node located two or more hops away from the target. To achieve this attack, we propose a new approach that searches for smaller perturbations on just a single node far from the target. In our experiments, the proposed method achieves a 99% attack success rate within two hops of the target on two datasets.
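The abstract does not detail the search procedure, so as a purely illustrative stand-in (not the paper's algorithm), here is a minimal gradient-guided sketch that perturbs a single attacker node's features under a linearized two-layer GCN surrogate, in the spirit of prior graph poisoning attacks; all names, shapes, and the greedy update rule are assumptions:

import numpy as np

def normalized_adj(A):
    # Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_tilde.sum(1) ** -0.5)
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

def poison_single_node(A, X, W, target, attacker, step=0.5, iters=20):
    # Perturb only the attacker's feature row so that the surrogate's
    # margin for the target's currently predicted class shrinks.
    A_hat2 = np.linalg.matrix_power(normalized_adj(A), 2)  # two propagation hops
    X = X.copy()
    for _ in range(iters):
        z = A_hat2 @ X @ W                  # linear surrogate logits, Z = A_hat^2 X W
        c = z[target].argmax()              # currently predicted class
        c2 = np.argsort(z[target])[-2]      # strongest competing class
        # For this linear surrogate, d z[target, k] / d X[attacker] = A_hat2[target, attacker] * W[:, k];
        # the coefficient is zero if the attacker is more than two hops away.
        grad = A_hat2[target, attacker] * (W[:, c] - W[:, c2])
        X[attacker] -= step * grad          # descend the classification margin
        if (A_hat2 @ X @ W)[target].argmax() != c:
            break                           # target's prediction flipped
    return X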
We also demonstrate that m-layer graph convolutional neural networks can be deceived by our indirect attack launched from anywhere within the target's m-hop neighborhood. The proposed attack can serve as a benchmark for future defense efforts to develop graph convolutional neural networks that are robust to adversaries.
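To make the m-layer, m-hop locality concrete, the following self-contained toy check (a path graph with random weights; purely illustrative, not an experiment from the paper) shows that in a two-layer GCN, perturbing a node two hops from the target changes the target's output, while a node three hops away cannot affect it:

import numpy as np

# Path graph 0 - 1 - 2 - 3; node 0 is the target.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
A_tilde = A + np.eye(4)
d_inv_sqrt = np.diag(A_tilde.sum(1) ** -0.5)
A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                 # node features
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 3))

def gcn2(X):
    H = np.maximum(A_hat @ X @ W1, 0.0)     # layer 1: one-hop aggregation + ReLU
    return A_hat @ H @ W2                   # layer 2: reaches two hops

z0 = gcn2(X)[0]
X2 = X.copy(); X2[2] += 10.0                # poison the node two hops away
X3 = X.copy(); X3[3] += 10.0                # poison the node three hops away
print(np.allclose(gcn2(X2)[0], z0))         # False: two-hop poisoning reaches node 0
print(np.allclose(gcn2(X3)[0], z0))         # True: three hops is outside the receptive field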