Abstract
Machine learning (ML) has become increasingly popular in network intrusion
detection. However, ML-based solutions produce a response for any input, regardless of whether it reflects patterns seen during training, a common issue across safety-critical applications. While several proposals exist for detecting Out-Of-Distribution (OOD) inputs in other fields, it remains unclear whether these approaches can effectively identify new forms of intrusion in network security. Unlike images depicting new classes in computer vision, new attacks do not necessarily alter overall traffic distributions and are therefore not guaranteed to be clearly OOD. In
this work, we investigate whether existing OOD detectors from other fields
allow the identification of unknown malicious traffic. We also explore whether more discriminative and semantically richer embedding spaces within the models, such as those produced by contrastive learning and multi-class training, benefit detection. Our investigation covers six OOD techniques that employ different detection strategies. These techniques are applied to models trained
in various ways and subsequently exposed to unknown malicious traffic from the
same and different datasets (network environments). Our findings suggest that
existing detectors can identify a considerable portion of new malicious traffic,
and that improved embedding spaces enhance detection. We also demonstrate that
simple combinations of certain detectors can identify almost 100% of malicious
traffic in our tested scenarios.