Adversarial poisoning attacks pose serious threats to various machine learning
applications. In particular, recent accumulative poisoning attacks show that
it is possible to inflict irreparable harm on a model via a sequence of
imperceptible attacks followed by a trigger batch. Because the data-level
discrepancy between poisoned and clean samples in real-time data streaming is
limited, current defensive methods treat the two indiscriminately. In this
paper, we adopt the perspective of model dynamics and propose a novel
information measure, namely, Memorization Discrepancy, to explore a defense
based on model-level information. By implicitly transferring changes in the
data manipulation to changes in the model outputs, Memorization Discrepancy
can discover imperceptible poisoned samples through their dynamics, which are
distinct from those of clean samples.
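As an illustrative sketch (the KL-divergence instantiation here is our
assumption; the paper gives the precise definition), Memorization Discrepancy
can be viewed as measuring how differently models from different training
stages respond to the same input, e.g.,
\[
\mathrm{MD}(x;\,\theta_t,\theta_{t-k}) \;=\; D_{\mathrm{KL}}\!\left(f_{\theta_{t-k}}(x)\,\big\|\,f_{\theta_t}(x)\right),
\]
where $f_{\theta}(x)$ denotes the model's predictive distribution on input
$x$, and $\theta_t$ and $\theta_{t-k}$ are parameters from the current and an
earlier training stage; poisoned samples are expected to exhibit distinct
(e.g., larger) discrepancy values than clean samples.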
We thoroughly explore its properties and propose Discrepancy-aware Sample
Correction (DSC) to defend against accumulative poisoning attacks. Extensive
experiments comprehensively characterize Memorization Discrepancy and verify
its effectiveness. The code is publicly available at
https://github.com/tmlr-group/Memorization-Discrepancy.