The application of Artificial Intelligence (AI) and Machine Learning (ML) to
cybersecurity challenges has gained traction in industry and academia,
partially as a result of widespread malware attacks on critical systems such as
cloud infrastructures and government institutions. Intrusion Detection Systems
(IDS) that use some form of AI have received widespread adoption because of their
ability to handle vast amounts of data with high prediction accuracy. These
systems are hosted in the organizational Cyber Security Operation Center (CSoC)
as defense tools to monitor and detect malicious network flows that would
otherwise compromise confidentiality, integrity, and availability (CIA). CSoC
analysts rely on these systems to make decisions about the detected threats.
However, IDSs designed using Deep Learning (DL) techniques are often treated as
black box models and provide no justification for their predictions. This
creates a barrier for CSoC analysts, who are unable to improve their
decisions based on the models' predictions. One solution to this problem is to
design an explainable IDS (X-IDS).
This survey reviews the state of the art in explainable AI (XAI) for IDS,
examines its current challenges, and discusses how these challenges carry over
to the design of an X-IDS. In particular, we comprehensively discuss black box
and white box approaches and present the trade-off between them in terms
of their performance and ability to produce explanations. Furthermore, we
propose a generic architecture that incorporates a human-in-the-loop component
and can be used as a guideline when designing an X-IDS. Research recommendations are given
from three critical viewpoints: the need to define explainability for IDS, the
need to create explanations tailored to various stakeholders, and the need to
design metrics to evaluate explanations.