Adversarial examples are inputs to a machine learning system intentionally
crafted by an attacker to fool the model into producing an incorrect output.
Adversarial attacks have proven highly effective in domains such as
image recognition, speech recognition, and spam detection. In this paper, we
study the nature of the adversarial problem in Network Intrusion Detection
Systems (NIDS). We focus on the attack perspective, which includes techniques
to generate adversarial examples capable of evading a variety of machine
learning models. More specifically, we explore the use of evolutionary
computation (particle swarm optimization and genetic algorithm) and deep
learning (generative adversarial networks) as tools for adversarial example
generation. To assess the performance of these algorithms in evading a NIDS, we
apply them to two publicly available data sets, namely the NSL-KDD and
UNSW-NB15, and compare them against a baseline perturbation method: Monte Carlo
simulation. The results show that our adversarial example generation techniques
cause high misclassification rates in eleven different machine learning models,
along with a voting classifier. Our work highlights the vulnerability of
machine learning-based NIDS to adversarial perturbation.
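To make the baseline concrete, the following toy sketch illustrates Monte Carlo perturbation against a detector: random perturbations within a fixed budget are drawn until a malicious feature vector is misclassified as benign. The nearest-centroid "NIDS", the centroid values, and the feature vector are illustrative assumptions, not details from the paper.

```python
import random

# Assumed toy model: a nearest-centroid detector over three normalized
# traffic features; a flow is flagged malicious if it lies closer to the
# malicious centroid than to the benign one.
BENIGN = [0.2, 0.1, 0.3]
MALICIOUS = [0.8, 0.9, 0.7]

def classify(x):
    d_b = sum((a - b) ** 2 for a, b in zip(x, BENIGN))
    d_m = sum((a - b) ** 2 for a, b in zip(x, MALICIOUS))
    return "malicious" if d_m < d_b else "benign"

def monte_carlo_evade(x, budget=0.3, trials=1000, seed=0):
    """Draw random perturbations within an L-infinity budget until the
    perturbed sample is classified benign; return None on failure."""
    rng = random.Random(seed)
    for _ in range(trials):
        cand = [min(1.0, max(0.0, v + rng.uniform(-budget, budget)))
                for v in x]
        if classify(cand) == "benign":
            return cand
    return None

attack = [0.6, 0.55, 0.5]        # a (hypothetical) malicious flow
adv = monte_carlo_evade(attack)
print(classify(attack))          # malicious
if adv is not None:
    print(classify(adv))         # benign
```

The evolutionary and GAN-based generators studied in the paper replace this blind random search with guided search, but the evaluation criterion is the same: the fraction of perturbed malicious samples the model misclassifies.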