Cybersecurity is a domain in which the data distribution is constantly changing
as attackers explore newer patterns to attack cyber infrastructure.
Intrusion detection systems are one of the important layers of cyber safety in
today's world. Machine learning based network intrusion detection systems have
started showing effective results in recent years, and deep learning models
have further improved their detection rates. However, the more accurate the
model, the greater its complexity, and hence the lower its interpretability.
Deep neural networks are complex and hard to interpret, which makes them
difficult to use in production because the reasons behind their decisions are
unknown. In this paper, we use a deep neural network for network intrusion
detection and also propose an explainable AI framework that adds transparency
at every stage of the machine learning pipeline. This is done by leveraging
explainable AI algorithms, which aim to make ML models less of a black box by
providing explanations of why a prediction is made. These explanations give us
measurable factors describing which features influence the prediction of a
cyberattack and to what degree.
The explanations are generated with SHAP, LIME, the Contrastive Explanations
Method, ProtoDash, and Boolean Decision Rules via Column Generation. We apply
these approaches to the NSL-KDD intrusion detection dataset and demonstrate the
results.