Abstract
Machine Learning (ML) has become pervasive, and its deployment in Network
Intrusion Detection Systems (NIDS) is inevitable, given its automated nature
and its higher accuracy than traditional methods in processing and classifying
large volumes of data. However, ML models suffer from several weaknesses, most
notably adversarial attacks, which aim to trick models into producing faulty
predictions. While most adversarial attack research focuses on computer vision
datasets, recent studies have examined whether these attacks transfer to
ML-based network security systems, especially NIDS, since the generation of
adversarial examples differs widely across domains.
To explore the practicality of adversarial attacks against ML-based NIDS
in depth, this paper makes several key contributions: it identifies numerous
practicality issues for evasion adversarial attacks on ML-based NIDS using an
attack tree threat model; introduces a taxonomy of the practicality issues
associated with adversarial attacks against ML-based NIDS; identifies specific
leaf nodes in the attack tree that show some practicality for real-world
implementation and comprehensively reviews these potentially viable attack
approaches; and investigates how the dynamic nature of real-world ML models
affects evasion adversarial attacks against NIDS. Our experiments indicate
that continuous re-training, even without adversarial training, can reduce the
effectiveness of adversarial attacks. Although adversarial attacks can
compromise ML-based NIDS, our aim is to highlight the significant gap between
research and real-world practicality in this domain, which warrants attention.