Explainability Guided Adversarial Evasion Attacks on Malware Detectors
Abstract
As the security of Artificial Intelligence (AI) becomes paramount, research on crafting and placing optimal adversarial perturbations has become increasingly critical. In the malware domain, generating adversarial samples relies heavily on the accuracy and placement of the crafted perturbations, with the goal of evading a trained classifier. This work applies explainability techniques to enhance adversarial evasion attacks on a machine-learning-based Windows PE malware detector. An explainability tool identifies the regions of a PE malware file that most significantly influence the decision-making of a given malware detector; those same regions can then be leveraged to inject adversarial perturbations with maximum efficiency. Profiling all regions of a PE malware file by their impact on the detector's decision enables the derivation of an efficient strategy for identifying the optimal location for perturbation injection. Such a strategy should weigh both a region's significance in influencing the detector's decision and the sensitivity of the PE file's integrity to modifications of that region. To assess the utility of explainable AI in crafting adversarial samples of Windows PE malware, we use the DeepExplainer module of SHAP to determine the contribution of each region of a PE malware file to its detection by MalConv, a CNN-based malware detector. Furthermore, we analyze the significance of SHAP values at a finer granularity by subdividing each Windows PE section into small subsections. We then perform adversarial evasion attacks on these subsections, guided by the SHAP values of the corresponding byte sequences.
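The region-profiling strategy described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's actual pipeline: a weighted byte sum stands in for MalConv, leave-one-out occlusion stands in for SHAP's DeepExplainer attributions, and the region names, weights, and "modifiable" set are all hypothetical.

```python
# Toy sketch: rank the regions of a (toy) PE file by their influence on a
# detector's score, then inject a perturbation into the most influential
# region that can be modified without breaking the file.

def detector_score(file_bytes, weights):
    """Toy stand-in detector: weighted byte sum squashed toward (0, 1)."""
    s = sum(w * b for w, b in zip(weights, file_bytes))
    return s / (1 + abs(s))

def region_contributions(file_bytes, weights, regions):
    """Occlusion-based importance: score drop when a region's bytes are zeroed
    (a crude stand-in for per-region SHAP attributions)."""
    base = detector_score(file_bytes, weights)
    contrib = {}
    for name, (start, end) in regions.items():
        occluded = file_bytes[:start] + [0] * (end - start) + file_bytes[end:]
        contrib[name] = base - detector_score(occluded, weights)
    return contrib

# Hypothetical 12-byte "PE file" split into three regions.
file_bytes = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
weights = [0.0] * 4 + [0.9] * 4 + [0.1] * 4   # ".text" dominates the score
regions = {"header": (0, 4), ".text": (4, 8), "slack": (8, 12)}

contrib = region_contributions(file_bytes, weights, regions)

# Strategy: among regions whose modification preserves file integrity
# (here we pretend only "header" and "slack" are safe to touch), pick the
# one with the highest contribution to the detection score.
modifiable = {"header", "slack"}
target = max(modifiable, key=lambda r: contrib[r])

# Inject the perturbation (here: zero the region) and compare scores.
start, end = regions[target]
perturbed = file_bytes[:start] + [0] * (end - start) + file_bytes[end:]
original_score = detector_score(file_bytes, weights)
evaded_score = detector_score(perturbed, weights)
```

In a real attack, the attribution step would come from `shap.DeepExplainer` over MalConv's byte embeddings, and the integrity constraint would exclude regions (e.g., executable code) whose modification corrupts the PE file.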
Lost in the Loader: The Many Faces of the Windows PE File Format
D. Nisi, M. Graziano, Y. Fratantonio, D. Balzarotti
Published: 2021
Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector
Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam
Published: January 3, 2023
Deceiving End-to-End Deep Learning Malware Detectors Using Adversarial Examples
Felix Kreuk, Assi Barak, Shir Aviv-Reuven, Moran Baruch, Benny Pinkas, Joseph Keshet
Published: 2018
Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability
Ishai Rosenberg, Shai Meir, Jonathan Berrebi, Ilay Gordon, Guillaume Sicard, Eli David
Published: September 28, 2020
Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection
Luca Demetrio, Scott E. Coull, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli
Published: August 17, 2020
Exploring Adversarial Examples in Malware Detection
Octavian Suciu, Scott E. Coull, Jeffrey Johns
Published: October 19, 2018
Optimization of Code Caves in Malware Binaries to Evade Machine Learning Detectors
J. Yuste, E. G. Pardo, J. Tapiador
Published: 2022
Intra-Section Code Cave Injection for Adversarial Evasion Attacks on Windows PE Malware File
K. Aryal, M. Gupta, M. Abdelsalam, M. Saleh
Published: 2024
Explainable Security
L. Viganò, D. Magazzeni
Published: 2020
SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, et al.
Published: 2023
Analyzing and Explaining Black-Box Models for Online Malware Detection
H. Manthena, J. C. Kimmel, M. Abdelsalam, M. Gupta
Published: 2023
Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning
Hyrum S. Anderson, Anant Kharkar, Bobby Filar, David Evans, Phil Roth
Published: January 27, 2018
Adversarial Examples for CNN-Based Malware Detectors
B. Chen, Z. Ren, C. Yu, I. Hussain, J. Liu
Published: 2019
A Unified Approach to Interpreting Model Predictions
Scott Lundberg, Su-In Lee
Published: May 23, 2017
A Value for n-Person Games
L. S. Shapley
Published: 1953