Abstract
The recent popularity of machine learning calls for a deeper understanding of
AI security. Amongst the numerous AI threats published so far, poisoning
attacks currently attract considerable attention. In a poisoning attack, the
opponent partially tampers with the dataset used for learning in order to
mislead the classifier during the testing phase.
This paper proposes a new protection strategy against poisoning attacks. The
technique relies on a new primitive called keyed non-parametric hypothesis
tests, which makes it possible to evaluate, under adversarial conditions, the
training input's conformance with a previously learned distribution
$\mathfrak{D}$. To do so, we use a secret key $\kappa$ unknown to the opponent.
Keyed non-parametric hypothesis tests differ from classical tests in that
the secrecy of $\kappa$ prevents the opponent from misleading the keyed test
into concluding that a (significantly) tampered dataset belongs to
$\mathfrak{D}$.
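
As an illustration only (the abstract does not describe the paper's actual construction), one can picture a keyed non-parametric test as a classical two-sample test applied to a secret, key-dependent view of the data. The Python sketch below uses a hypothetical keyed_ks_test in which $\kappa$ seeds a random projection direction and a standard Kolmogorov-Smirnov test is run on the projected samples; since the opponent does not know $\kappa$, it cannot target the one-dimensional view that the defender actually tests. All names and parameters here are assumptions made for the example.

import numpy as np
from scipy.stats import ks_2samp

def keyed_ks_test(reference, batch, kappa, alpha=0.05):
    # The secret key kappa seeds the RNG, so the projection direction is
    # unknown to the opponent (illustrative assumption, not the paper's scheme).
    rng = np.random.default_rng(kappa)
    direction = rng.normal(size=reference.shape[1])
    direction /= np.linalg.norm(direction)
    # Classical two-sample Kolmogorov-Smirnov test on the keyed projection.
    stat, p_value = ks_2samp(reference @ direction, batch @ direction)
    # Accept the batch as conforming to D when no significant drift is found.
    return p_value >= alpha

# Toy usage with synthetic data: a clean batch drawn from D is accepted with
# high probability, while a significantly tampered batch is rejected.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(2000, 10))   # samples from D
clean = rng.normal(0.0, 1.0, size=(500, 10))
poisoned = 1.5 * clean + 1.0                         # shifted and rescaled batch
print(keyed_ks_test(reference, clean, kappa=12345))
print(keyed_ks_test(reference, poisoned, kappa=12345))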