Abstract
Machine Learning (ML) algorithms have become increasingly popular for
supporting Network Intrusion Detection Systems (NIDS). Nevertheless, extensive
research has shown their vulnerability to adversarial attacks, which involve
subtle perturbations to the inputs of the models aimed at compromising their
performance. Recent proposals have effectively leveraged Graph Neural Networks
(GNNs) to produce predictions that also account for the structural patterns
exhibited by intrusions, thereby enhancing detection robustness. However, the adoption of
GNN-based NIDS introduces new types of risks. In this paper, we propose the
first formalization of adversarial attacks specifically tailored to GNNs in
network intrusion detection. Moreover, we outline and model the problem-space
constraints that attackers must satisfy to carry out feasible structural
attacks in real-world scenarios. As a final contribution, we conduct an
extensive experimental campaign in which we launch the proposed attacks against
state-of-the-art GNN-based NIDS. Our findings show that these models are more
robust against classical feature-based adversarial attacks, yet remain
susceptible to structure-based attacks.