Deep neural networks, like many other machine learning models, have recently
been shown to lack robustness against adversarially crafted inputs. These
inputs are derived from regular inputs by minor yet carefully selected
perturbations that deceive machine learning models into misclassifications
chosen by the adversary. Existing work in this emerging field has largely been
specific to the domain of image classification, since the high entropy of
images can be conveniently manipulated without changing their overall visual
appearance. Yet, it remains unclear how such attacks translate to more
security-sensitive applications such as malware detection, where generating
valid adversarial samples poses significant challenges and the consequences of
failure are arguably graver.
In this paper, we show how to construct highly effective adversarial sample
crafting attacks against neural networks used as malware classifiers. The
application domain of malware classification imposes additional constraints
on the adversarial sample crafting problem compared to the computer vision
domain: (i) continuous, differentiable input domains are replaced by discrete,
often binary inputs; and (ii) the loose condition of leaving the visual
appearance unchanged is replaced by the requirement of equivalent functional
behavior. We demonstrate the feasibility of these attacks on many different
instances of malware classifiers that we trained using the DREBIN Android
malware data set.
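To make the discrete-input constraint concrete, the following is a minimal
sketch of a gradient-guided crafting loop on binary feature vectors, in the
spirit of saliency-map-based attacks: only absent features may be added (bits
flipped from 0 to 1), so the malware's original functionality is preserved.
The model interface, function name, and change budget are illustrative
assumptions, not the paper's actual implementation.

```python
# Hedged sketch: gradient-guided adversarial crafting on a binary feature
# vector, restricted to feature additions. All names here are assumptions
# for illustration only.
import torch
import torch.nn as nn

def craft_adversarial(model: nn.Module, x: torch.Tensor,
                      target: int, max_changes: int = 20) -> torch.Tensor:
    """Flip at most `max_changes` bits from 0 to 1 so that `model`
    assigns class `target` (e.g. 'benign') to the binary vector `x`.
    Only additions are allowed, keeping original behavior intact."""
    x_adv = x.clone()
    for _ in range(max_changes):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0))
        if logits.argmax(dim=1).item() == target:
            break  # classifier already fooled
        # Gradient of the target class score w.r.t. each input feature.
        grad = torch.autograd.grad(logits[0, target], x_adv)[0]
        # Only features currently 0 are candidates; mask out the rest.
        grad = grad.masked_fill(x_adv.detach() > 0, float("-inf"))
        idx = grad.argmax().item()
        if grad[idx] == float("-inf"):
            break  # nothing left to add
        x_adv = x_adv.detach()
        x_adv[idx] = 1.0  # add the most salient feature
    return x_adv.detach()
```

Restricting the perturbation to additions is one simple way to respect the
functionality constraint; whether an added feature is actually realizable in
a valid application still has to be checked against the feature semantics.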
We furthermore evaluate to what extent potential defensive mechanisms against
adversarial crafting can be transferred to the setting of malware
classification. While feature reduction did not prove to have a positive
impact, distillation and re-training on adversarially crafted samples show
promising results.
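On the defensive side, re-training can be sketched as follows: adversarial
variants of known malware are crafted (here reusing `craft_adversarial` from
the sketch above) and fed back into the training set under their true label.
The data layout, label encoding (malware = 1), and training loop are
assumptions made for illustration, not the paper's setup.

```python
# Hedged sketch: re-training on adversarially crafted samples. Assumes
# binary labels with malware = 1, benign = 0, and reuses the
# craft_adversarial() sketch defined above.
import torch
from torch.utils.data import TensorDataset, DataLoader

def adversarial_retrain(model, X_train, y_train, malware_X,
                        benign_class: int = 0, epochs: int = 5):
    # Craft adversarial variants of known malware; keep the true label.
    adv = torch.stack([craft_adversarial(model, x, target=benign_class)
                       for x in malware_X])
    X = torch.cat([X_train, adv])
    y = torch.cat([y_train, torch.ones(len(adv), dtype=torch.long)])
    loader = DataLoader(TensorDataset(X, y), batch_size=128, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model
```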