Abstract
Deploying robust machine learning models must account for concept drift
arising from the dynamic, non-stationary nature of data. Addressing drift is
particularly imperative in the security domain, where the ever-evolving threat
landscape and the scarcity of labeled training data at deployment time lead to
performance degradation. Recently proposed concept drift detection methods
tackle this problem by identifying changes in feature/data distributions and
periodically retraining the models to learn new concepts. While such
strategies are worthwhile when feasible, they are not robust against
attacker-induced drift and suffer a delay in detecting new attacks. We address
these shortcomings in this work: we propose a robust drift detector that not
only identifies drifted samples but also discovers new classes as they arrive
in an online fashion. We evaluate the proposed method on two security-relevant
data sets: a network intrusion data set released in 2018, and an APT Command
and Control data set combined with web categorization data. Our evaluation
shows that our drift detector is not only highly accurate but also robust
against adversarial drift and discovers new classes from drifted samples.
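The baseline strategy the abstract contrasts against, flagging drift when the
feature distribution of recent data diverges from a reference window, can be
sketched as follows. This is a generic illustration, not the authors'
detector: the `WindowDriftDetector` name, the window size, and the 0.3
threshold are all assumptions of this sketch, and a real system would compare
distributions per feature and pick the threshold from a significance test.

```python
import bisect

def ks_statistic(ref, cur):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples."""
    ref, cur = sorted(ref), sorted(cur)
    d = 0.0
    for v in ref + cur:
        f_ref = bisect.bisect_right(ref, v) / len(ref)
        f_cur = bisect.bisect_right(cur, v) / len(cur)
        d = max(d, abs(f_ref - f_cur))
    return d

class WindowDriftDetector:
    """Illustrative detector: flags drift when the KS statistic between
    a fixed reference sample and the most recent window of observations
    exceeds a threshold (0.3 here is an arbitrary choice)."""

    def __init__(self, reference, window=100, threshold=0.3):
        self.reference = list(reference)
        self.window = window
        self.threshold = threshold
        self.recent = []

    def update(self, x):
        """Feed one observation; return True once drift is flagged."""
        self.recent.append(x)
        if len(self.recent) < self.window:
            return False          # not enough data to compare yet
        self.recent = self.recent[-self.window:]
        return ks_statistic(self.reference, self.recent) > self.threshold

# Stream values matching the reference range, then clearly shifted values.
det = WindowDriftDetector([i / 100 for i in range(100)], window=50)
in_dist = [det.update(2 * i / 100) for i in range(50)]   # spans 0.00-0.98
shifted = [det.update(1.5 + i / 100) for i in range(50)]  # all out of range
print(any(in_dist), shifted[-1])  # no flag in-distribution, flag after shift
```

Note the limitation the abstract points out: a detector like this only reacts
after the recent window has filled with drifted samples, and an adversary who
perturbs features gradually can stay below the threshold.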