In the last decade, the use of machine learning techniques in anomaly-based
intrusion detection systems has seen much success. However, recent studies have
shown that machine learning in general, and deep learning in particular, is
vulnerable to adversarial attacks, in which an attacker attempts to fool models
by supplying deceptive input. Research in computer vision, where this
vulnerability was first discovered, has shown that adversarial images designed
to fool a specific model can deceive other machine learning models. In this
paper, we investigate the transferability of adversarial network traffic
against multiple machine learning-based intrusion detection systems.
Furthermore, we analyze the robustness of ensemble intrusion detection
systems, which are known to achieve better accuracy than a single model,
against the transferability of adversarial attacks. Finally, we examine Detect
& Reject as a defensive mechanism to limit the effect of the transferability
property of adversarial network traffic against machine learning-based
intrusion detection systems.