Abstract
The effectiveness of Intrusion Detection Systems (IDS) is critical in an era
where cyber threats are becoming increasingly complex. Machine learning (ML)
and deep learning (DL) models provide an efficient and accurate solution for
identifying attacks and anomalies in computer networks. However, using ML and
DL models in IDS has led to a trust deficit due to their non-transparent
decision-making. This transparency gap in IDS research is significant,
affecting confidence and accountability. To address this gap, this paper introduces a
novel Explainable IDS approach, called X-CBA, that leverages the structural
advantages of Graph Neural Networks (GNNs) to effectively process network
traffic data, while also adapting a new Explainable AI (XAI) methodology.
Unlike most GNN-based IDSs, which depend on labeled network traffic and node
features and thereby overlook critical packet-level information, our approach
leverages a broader range of traffic data through network flows, including edge
attributes, to improve detection capabilities and adapt to novel threats.
Through empirical testing, we establish that our approach not only achieves
high threat-detection accuracy (99.47%) but also advances the field by
providing clear, actionable explanations of its analytical outcomes. This
research also aims to bridge this trust gap and facilitate the broader
integration of ML/DL technologies in cybersecurity defenses by offering a local
and global explainability solution that is both precise and interpretable.
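To make the flow-graph idea concrete: the abstract describes a GNN that operates on network flows and incorporates per-flow edge attributes, rather than node features alone. The minimal PyTorch sketch below shows one way such edge-aware message passing could look. It is an illustration under stated assumptions only; the class name EdgeAwareLayer, the dimensions, and the mean aggregation are hypothetical and do not reflect X-CBA's actual architecture, which the abstract does not specify.

import torch
import torch.nn as nn

class EdgeAwareLayer(nn.Module):
    # One round of message passing in which messages are built from
    # edge (flow) attributes as well as the source node's embedding.
    # All layer sizes and the mean aggregation are illustrative choices.
    def __init__(self, node_dim, edge_dim, out_dim):
        super().__init__()
        self.msg = nn.Linear(node_dim + edge_dim, out_dim)
        self.upd = nn.Linear(node_dim + out_dim, out_dim)

    def forward(self, x, edge_index, edge_attr):
        src, dst = edge_index  # each of shape [num_edges]
        # Message: concatenate the source node's state with the flow's features.
        m = torch.relu(self.msg(torch.cat([x[src], edge_attr], dim=-1)))
        # Mean-aggregate incoming messages per destination node.
        agg = torch.zeros(x.size(0), m.size(-1))
        agg.index_add_(0, dst, m)
        deg = torch.zeros(x.size(0)).index_add_(
            0, dst, torch.ones(dst.size(0))).clamp(min=1)
        agg = agg / deg.unsqueeze(-1)
        # Update: combine each node's previous state with its aggregated messages.
        return torch.relu(self.upd(torch.cat([x, agg], dim=-1)))

# Toy usage: 4 hosts, 3 flows with 5 per-flow attributes each (random data).
x = torch.randn(4, 8)                              # node features
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # src -> dst host indices
edge_attr = torch.randn(3, 5)                      # per-flow (edge) attributes
h = EdgeAwareLayer(8, 5, 16)(x, edge_index, edge_attr)
print(h.shape)  # torch.Size([4, 16])

The key design point this sketch isolates is that edge_attr enters the message function directly, so per-flow information (e.g., byte counts or durations) shapes node embeddings instead of being discarded, which is the property the abstract attributes to the proposed approach.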