Abstract
Malware is a major threat to computer systems and imposes many challenges to
cyber security. Targeted threats, such as ransomware, cause millions of dollars
in losses every year. The constant increase of malware infections has been
motivating popular antiviruses (AVs) to develop dedicated detection strategies,
which include meticulously crafted machine learning (ML) pipelines. However,
malware developers unceasingly change their samples' features to bypass
detection. This constant evolution of malware samples causes changes to the
data distribution (i.e., concept drifts) that directly affect ML models'
detection rates, an issue largely overlooked in the literature
work. In this work, we evaluate the impact of concept drift on malware
classifiers for two Android datasets: DREBIN (about 130K apps) and a subset of
AndroZoo (about 285K apps). We used these datasets to train an Adaptive Random
Forest (ARF) classifier, as well as a Stochastic Gradient Descent (SGD)
classifier. We also ordered all dataset samples by their VirusTotal
submission timestamp and then extracted features from their textual attributes
using two algorithms (Word2Vec and TF-IDF). Then, we conducted experiments
comparing both feature extractors and classifiers, as well as four drift
detectors (DDM, EDDM, ADWIN, and KSWIN), to determine the best approach for real
environments. Finally, we compare some possible approaches to mitigate concept
drift and propose a novel data stream pipeline that updates both the classifier
and the feature extractor. To do so, we conducted a longitudinal evaluation by
(i) classifying malware samples collected over nine years (2009-2018), (ii)
applying concept drift detection algorithms to attest to the pervasiveness of
drift, (iii) comparing distinct ML approaches to mitigate the issue, and (iv)
proposing an
ML data stream pipeline that outperformed literature approaches.
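The pipeline the abstract describes, an online classifier paired with a drift detector that, upon drift, refits both the feature extractor and the model on a recent window, can be sketched in plain Python. This is an illustrative reconstruction, not the authors' implementation: the simplified DDM-style detector, the window-refit bag-of-words extractor, and the perceptron below are lightweight stand-ins for the paper's DDM/EDDM/ADWIN/KSWIN detectors, TF-IDF/Word2Vec extractors, and ARF/SGD classifiers.

```python
import math
from collections import Counter, deque

class DDMLikeDetector:
    """Simplified DDM-style detector: track the running error rate p and its
    std s; flag drift when p + s rises above the best observed p_min + 3*s_min
    (the 3-sigma rule used by DDM)."""
    def __init__(self, warmup=30):
        self.n = 0
        self.p = 0.0                       # running error rate
        self.p_min = float("inf")
        self.s_min = float("inf")
        self.warmup = warmup
    def update(self, error):               # error: 1 = misclassified, 0 = correct
        self.n += 1
        self.p += (error - self.p) / self.n
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.n > self.warmup and self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s
        return self.n > self.warmup and self.p + s > self.p_min + 3 * self.s_min

class WindowBoWExtractor:
    """Bag-of-words extractor whose vocabulary is refit on demand from a
    recent window of samples (the 'update the feature extractor' step)."""
    def __init__(self):
        self.vocab = set()
    def refit(self, texts):
        self.vocab = {tok for t in texts for tok in t.split()}
    def transform(self, text):
        return dict(Counter(tok for tok in text.split() if tok in self.vocab))

class Perceptron:
    """Minimal online linear classifier (stand-in for ARF/SGD)."""
    def __init__(self):
        self.w = {}
    def predict(self, x):
        return 1 if sum(self.w.get(k, 0.0) * v for k, v in x.items()) > 0 else 0
    def learn(self, x, y):
        if self.predict(x) != y:           # classic perceptron update
            sign = 1.0 if y == 1 else -1.0
            for k, v in x.items():
                self.w[k] = self.w.get(k, 0.0) + sign * v

def run_stream(stream, window_size=200):
    """Process (text, label) pairs in order; on drift, refit BOTH the
    extractor and the classifier on the recent window, then continue online."""
    extractor, model, detector = WindowBoWExtractor(), Perceptron(), DDMLikeDetector()
    window = deque(maxlen=window_size)
    drifts = 0
    for text, y in stream:
        window.append((text, y))
        if not extractor.vocab:            # bootstrap the vocabulary lazily
            extractor.refit([text])
        x = extractor.transform(text)
        if detector.update(int(model.predict(x) != y)):
            drifts += 1
            extractor.refit(t for t, _ in window)
            model, detector = Perceptron(), DDMLikeDetector()
            for t, yy in window:           # retrain on the recent window
                model.learn(extractor.transform(t), yy)
        else:
            model.learn(x, y)
    return drifts
```

A stream whose label/vocabulary distribution flips midway triggers a refit, while a stationary stream does not; the per-sample loop mirrors the timestamp-ordered evaluation described above.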