Continuous learning from streaming data is among the most challenging topics
in contemporary machine learning. In this domain, learning algorithms must
not only handle massive volumes of rapidly arriving data, but also adapt to
changes that may emerge in the data over time. This evolving nature of data
streams is known as concept drift. While a plethora of methods has been
designed to detect its occurrence, all of them assume that the drift arises
from genuine changes in the underlying data source. However, one must also
consider the possibility of a malicious injection of false data that simulates
concept drift. This adversarial setting assumes a poisoning attack conducted
to damage the underlying classification system by forcing adaptation to false
data. Existing drift detectors are not capable of differentiating between
real and adversarial concept drift. In this paper,
we propose a framework for robust concept drift detection in the presence of
adversarial and poisoning attacks. We introduce a taxonomy of two types of
adversarial concept drift, as well as a robust, trainable drift detector based
on an augmented Restricted Boltzmann Machine with improved gradient
computation and energy function. We also introduce Relative Loss of
Robustness, a novel measure for evaluating the performance of concept drift
detectors
under poisoning attacks. Extensive computational experiments, conducted on
both fully and sparsely labeled data streams, demonstrate the high robustness
and efficacy of the proposed drift detection framework in adversarial
scenarios.