Abstract
The escalating sophistication of cyberattacks has encouraged the integration
of machine learning (ML) techniques into intrusion detection systems, but the rise of
adversarial examples presents a significant challenge. These crafted
perturbations mislead ML models, enabling attackers to evade detection or
trigger false alerts. In response, adversarial purification has emerged as a
compelling solution, particularly with diffusion models showing promising
results. However, their purification potential remains unexplored in the
context of intrusion detection. This paper demonstrates the effectiveness of
diffusion models in purifying adversarial examples in network intrusion
detection. Through a comprehensive analysis of the diffusion parameters, we
identify optimal configurations that maximize adversarial robustness with minimal
impact on performance for unperturbed traffic. Importantly, this study reveals insights into the
relationship between diffusion noise and diffusion steps, representing a novel
contribution to the field. Our experiments are carried out on two datasets and
against five adversarial attacks. The implementation code is publicly available.
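To make the purification idea concrete: diffusion-based purification typically adds forward-process noise to a (possibly adversarial) input for some number of steps and then runs the learned reverse process to recover a clean sample. The sketch below is a minimal DiffPure-style illustration, not the authors' released code; the `denoiser` model, the beta schedule, and the `t_star` parameter (the noise level / step count trade-off the abstract refers to) are all assumptions.

```python
import torch

def purify(x_adv, denoiser, betas, t_star):
    """Hypothetical diffusion purification of network-flow features.

    x_adv:    tensor of input features, shape (batch, dim)
    denoiser: model predicting the added noise eps_theta(x_t, t)
    betas:    1-D tensor of forward-process variances beta_1..beta_T
    t_star:   how far to diffuse; larger values inject more noise
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    # Forward process: jump directly to step t_star in closed form,
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    a_bar = alpha_bars[t_star - 1]
    x_t = torch.sqrt(a_bar) * x_adv + torch.sqrt(1.0 - a_bar) * torch.randn_like(x_adv)

    # Reverse process: standard DDPM ancestral sampling from t_star down to 1
    for t in range(t_star, 0, -1):
        a_t, ab_t = alphas[t - 1], alpha_bars[t - 1]
        t_batch = torch.full((x_t.shape[0],), t, device=x_t.device)
        eps = denoiser(x_t, t_batch)
        mean = (x_t - (1.0 - a_t) / torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(a_t)
        noise = torch.randn_like(x_t) if t > 1 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t - 1]) * noise

    return x_t  # purified input, passed on to the intrusion detector
```

In this framing, `t_star` captures the trade-off the abstract highlights: more forward noise washes out more of the adversarial perturbation but also degrades the benign signal, so the noise level and the number of diffusion steps must be tuned jointly.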