Abstract
Malware detection is a constant challenge in cybersecurity due to the rapid
development of new attack techniques. Traditional signature-based approaches
struggle to keep pace with the sheer volume of malware samples. Machine
learning offers a promising solution, but faces issues in generalizing to unseen
samples and in explaining why instances are identified as malware. However,
human-understandable explanations are especially important in
security-critical fields, where understanding model decisions is crucial for
trust and legal compliance. While deep learning models excel at malware
detection, their black-box nature hinders explainability. Conversely,
interpretable models often fall short in performance. To bridge this gap in
this application domain, we propose the use of Logic Explained Networks (LENs),
a recently introduced class of interpretable neural networks that provide
explanations in the form of First-Order Logic (FOL) rules. This paper extends
the application of LENs to the complex domain of malware detection,
specifically using the large-scale EMBER dataset. Experimental results show
that LENs achieve robustness exceeding that of traditional interpretable
methods while rivaling black-box models. Moreover, we introduce a tailored
version of LENs that generates logic explanations with higher fidelity to the
model's predictions.