Abstract
The black-box nature of artificial intelligence (AI) models has raised many
concerns about their use in critical applications. Explainable
Artificial Intelligence (XAI) is a rapidly growing research field that aims to
create machine learning models that can provide clear and interpretable
explanations for their decisions and actions. In the field of network
cybersecurity, XAI has the potential to revolutionize the way we approach
network security by enabling us to better understand the behavior of cyber
threats and to design more effective defenses. In this survey, we review the
state of the art in XAI for cybersecurity in network systems and explore the
various approaches that have been proposed to address this important problem.
The review follows a systematic classification of network-driven cybersecurity
threats and issues. We discuss the challenges and limitations of current XAI
methods in the context of cybersecurity and outline promising directions for
future research.