With the great success of graph embedding models in both academia and industry,
the robustness of graph embeddings against adversarial attacks has inevitably
become a central problem in graph learning. Despite this fruitful progress,
most current works perform attacks in a white-box fashion: they require access
to model predictions and labels to construct the adversarial loss. However,
since model predictions are rarely accessible in real systems, white-box
attacks are impractical against real-world graph learning systems.
This paper generalizes current attack frameworks in a more flexible sense: we
aim to attack various kinds of graph embedding models in a black-box setting.
To this end, we first investigate the theoretical connections between graph
signal processing and graph embedding models in a principled way, and formulate
the graph embedding model as a general graph signal processing operation with a
corresponding graph filter. On this basis, we construct a generalized
adversarial attacker, GF-Attack, from the graph filter and the feature matrix.
Instead of accessing any knowledge of the target classifiers used on top of the
graph embeddings, GF-Attack perturbs only the graph filter, in a black-box
fashion. To validate the generality of GF-Attack, we instantiate the attacker
on four popular graph embedding models. Extensive experiments on several
benchmark datasets validate the effectiveness of our attacker. In particular,
even small graph perturbations, such as a one-edge flip, consistently degrade
the performance of different graph embedding models.
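To make the graph-filter view concrete, the following is a minimal sketch (not the paper's exact algorithm) of the core idea under simplifying assumptions: a GCN-style embedding is approximated by a symmetric normalized filter applied to the feature matrix, and a candidate one-edge flip is scored by how much it perturbs the leading singular values of the filtered features. The function names, the choice of filter order `K`, the truncation `rank`, and the brute-force enumeration of edge flips are all illustrative assumptions.

```python
import numpy as np

def sym_norm_filter(A, K=2):
    # GCN-style graph filter: S^K with S = D^{-1/2} (A + I) D^{-1/2}.
    # (Assumed form; the paper covers a family of filters.)
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)
    return np.linalg.matrix_power(S, K)

def one_edge_flip_attack(A, X, K=2, rank=4):
    # Score every single-edge flip by the change it induces in the
    # top singular values of the filtered features S^K X, and return
    # the most damaging flip. Brute force: O(n^2) candidate flips.
    n = A.shape[0]
    base = np.linalg.svd(sym_norm_filter(A, K) @ X,
                         compute_uv=False)[:rank]
    best, best_score = None, -np.inf
    for i in range(n):
        for j in range(i + 1, n):
            A2 = A.copy()
            A2[i, j] = A2[j, i] = 1 - A2[i, j]  # flip edge (i, j)
            s = np.linalg.svd(sym_norm_filter(A2, K) @ X,
                              compute_uv=False)[:rank]
            score = np.linalg.norm(s - base)
            if score > best_score:
                best, best_score = (i, j), score
    return best
```

The key point the sketch illustrates is that no classifier weights or predictions are consulted: the attack objective depends only on the graph filter and the feature matrix, which is what makes the attack black-box with respect to the downstream model.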