With the growth of adversarial attacks against machine learning models, concerns have emerged about potential vulnerabilities in deep neural network-based intrusion detection systems (IDS). In this paper, we study
the resilience of deep learning-based intrusion detection systems against
adversarial attacks. We apply the min-max (or saddle-point) approach to train intrusion detection systems against adversarial samples crafted from the UNSW-NB15 dataset. The max step generates adversarial samples that maximize the loss and thereby attack the deep neural network. On the defense side, we utilize the existing min approach [2], [9] as a defense strategy, optimizing the intrusion detection model to minimize the loss on the incorporated adversarial samples during adversarial training.
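For concreteness, this corresponds to the standard robust-optimization (saddle-point) objective: the inner maximization crafts the loss-maximizing perturbation and the outer minimization trains the model on it. The perturbation set $\mathcal{S}$ below (e.g., an $\ell_\infty$ ball of radius $\epsilon$) is an illustrative assumption rather than a constraint taken from this paper:

$$\min_{\theta} \; \mathbb{E}_{(x,y) \sim \mathcal{D}} \Big[ \max_{\delta \in \mathcal{S}} L(\theta, x + \delta, y) \Big]$$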
We study and measure the effectiveness of these adversarial attack methods as well as the resistance of the adversarially trained models against such attacks. We find that adversarial attack methods originally designed for binary domains can also be applied in continuous domains, where they exhibit different misclassification levels.
Finally, we show that principal component analysis (PCA)-based feature reduction can boost the robustness of a deep neural network (DNN)-based IDS.
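A minimal sketch of how such a min-max training pipeline with PCA preprocessing could look, assuming PyTorch and scikit-learn; the synthetic stand-in data, network architecture, number of PCA components, and PGD hyperparameters are illustrative assumptions, not settings from the paper:

```python
# Hypothetical sketch: min-max (PGD-based) adversarial training of a DNN IDS
# on PCA-reduced features. Synthetic data stands in for UNSW-NB15.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 42)).astype("float32")   # stand-in flow features
y = rng.integers(0, 2, size=2000)                    # 0 = benign, 1 = attack

# PCA-based feature reduction (20 components is an assumed, illustrative value)
X_red = PCA(n_components=20).fit_transform(X).astype("float32")
X_t, y_t = torch.tensor(X_red), torch.tensor(y, dtype=torch.long)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def pgd_attack(x, y, eps=0.1, alpha=0.02, steps=10):
    """Inner 'max' step: find an l_inf-bounded perturbation that maximizes the loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent on the loss
            delta.clamp_(-eps, eps)              # project back into the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()

for epoch in range(5):
    for i in range(0, len(X_t), 128):
        xb, yb = X_t[i:i+128], y_t[i:i+128]
        x_adv = pgd_attack(xb, yb)               # max step: craft adversarial samples
        opt.zero_grad()
        loss = loss_fn(model(x_adv), yb)         # min step: train on adversarial samples
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: adversarial loss {loss.item():.3f}")
```

In this sketch the inner PGD loop plays the role of the max step and the outer optimizer update the min step; evaluating the trained model on similarly perturbed test samples would mirror the robustness measurements described above.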