Graph Attention Networks (GATs) are effective deep learning models for graph
data. However, recent work has shown that the classical GAT is vulnerable to
adversarial attacks: its performance degrades dramatically under slight perturbations.
Therefore, enhancing the robustness of GAT is a critical problem. This paper
proposes Robust GAT (RoGAT), which improves the robustness of GAT by revising
the attention mechanism. Unlike the original GAT, which assigns attention
scores to different edges but remains sensitive to perturbations, RoGAT
progressively adds an extra dynamic attention score to improve robustness.
First, RoGAT revises the edge weights based on the smoothness assumption,
which commonly holds for ordinary graphs. Second, it further revises the node
features to suppress feature noise. An extra attention score is then generated
from the dynamic edge weights and used to reduce the impact of adversarial
attacks. Experiments against targeted and untargeted attacks on citation
datasets demonstrate that RoGAT outperforms most recent defensive methods.
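The three-step revision described above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation: the smoothness-based dynamic weight is modeled here as a Gaussian feature-similarity score, feature denoising as neighborhood averaging, and the base attention score as a simple dot product; all function names and hyperparameters (`gamma`, `alpha`) are hypothetical.

```python
import numpy as np

# Toy graph: directed edges (u, v) mean node v attends to node u.
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.1, 0.9],
              [0.0, 1.0]])  # node feature matrix

def dynamic_edge_weight(xi, xj, gamma=1.0):
    # Smoothness-based score: edges joining similar nodes get larger weight,
    # so adversarially inserted edges between dissimilar nodes are down-weighted.
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def smooth_features(X, edges, alpha=0.5):
    # Pull each node's features toward the mean of its in-neighbours
    # to suppress feature noise.
    X_new = X.copy()
    for i in range(len(X)):
        nbrs = [u for (u, v) in edges if v == i]
        if nbrs:
            X_new[i] = (1 - alpha) * X[i] + alpha * X[nbrs].mean(axis=0)
    return X_new

def attention_scores(X, edges):
    # Combine a GAT-style base score (here, a dot product) with the extra
    # dynamic weight, then normalise per destination node with a softmax.
    raw = {(u, v): X[u] @ X[v] + dynamic_edge_weight(X[u], X[v])
           for (u, v) in edges}
    scores = {}
    for v in range(len(X)):
        in_edges = [e for e in edges if e[1] == v]
        logits = np.array([raw[e] for e in in_edges])
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()
        scores.update(zip(in_edges, weights))
    return scores

att = attention_scores(smooth_features(X, edges), edges)
```

The resulting `att` dictionary maps each edge to an attention weight that sums to one over each node's incoming edges; edges between dissimilar endpoints receive smaller dynamic scores, which is the intuition behind suppressing perturbed connections.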