Machine learning for malware classification shows encouraging results, but
real deployments suffer from performance degradation as malware authors adapt
their techniques to evade detection. This phenomenon, known as concept drift,
occurs as new malware examples evolve and become increasingly dissimilar to the
original training examples. One promising method to cope with concept drift is
classification with rejection, in which examples that are likely to be
misclassified are instead quarantined until they can be expertly analyzed.
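To illustrate the rejection idea, the following minimal sketch (not TRANSCENDENT's actual implementation) wraps an arbitrary scikit-learn classifier and quarantines test examples whose prediction confidence falls below a threshold; the base classifier, the threshold value, and the use of class probabilities as the quality measure are all illustrative assumptions.

```python
# Minimal sketch of classification with rejection (illustrative only):
# examples whose prediction confidence falls below a threshold are
# quarantined for manual analysis instead of being assigned a label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # assumed base classifier

def predict_with_rejection(clf, X, threshold=0.8):
    """Return predicted labels, with -1 marking quarantined (rejected) examples."""
    proba = clf.predict_proba(X)            # per-class probabilities
    confidence = proba.max(axis=1)          # confidence in the predicted class
    labels = clf.classes_[proba.argmax(axis=1)]
    return np.where(confidence >= threshold, labels, -1)

# Usage (hypothetical data with integer labels):
# clf = RandomForestClassifier().fit(X_train, y_train)
# y_pred = predict_with_rejection(clf, X_test, threshold=0.8)
```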
We propose TRANSCENDENT, a rejection framework built on Transcend, a recently
proposed strategy based on conformal prediction theory. In particular, we
provide a formal treatment of Transcend, enabling us to refine conformal
evaluation theory -- its underlying statistical engine -- and gain a better
understanding of the theoretical reasons for its effectiveness. In the process,
we develop two additional conformal evaluators that match or surpass the
performance of the original while significantly decreasing the computational
overhead. We evaluate TRANSCENDENT on a malware dataset spanning five years,
in a setting that removes sources of experimental bias present in the original
evaluation.
TRANSCENDENT outperforms state-of-the-art approaches while generalizing across
different malware domains and classifiers.
To further assist practitioners, we determine the optimal operational
settings for a TRANSCENDENT deployment and show how it can be applied to many
popular learning algorithms. These insights support both old and new empirical
findings, making Transcend a sound and practical solution for the first time.
To this end, we release TRANSCENDENT as open source to aid the adoption of
rejection strategies by the security community.