Data poisoning is an attack on machine learning models wherein the attacker
adds examples to the training set to manipulate the behavior of the model at
test time. This paper explores poisoning attacks on neural nets. The proposed
attacks use "clean-labels"; they don't require the attacker to have any control
over the labeling of training data. They are also targeted; they control the
behavior of the classifier on a $\textit{specific}$ test instance without
degrading overall classifier performance. For example, an attacker could add a
seemingly innocuous image (that is properly labeled) to a training set for a
face recognition engine, and control the identity of a chosen person at test
time. Because the attacker does not need to control the labeling function,
poisons could be introduced into the training set simply by leaving them on the
web and waiting for them to be scraped by a data collection bot.
We present an optimization-based method for crafting poisons, and show that
a single poison image can control classifier behavior when transfer
learning is used. For full end-to-end training, we present a "watermarking"
strategy that makes poisoning reliable using multiple ($\approx$50) poisoned
training instances. We demonstrate our method by generating poisoned frog
images from the CIFAR-10 dataset and using them to manipulate image classifiers.
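One natural way to instantiate the optimization-based crafting method is as a "feature collision": minimize $\|f(x) - f(t)\|^2 + \beta \|x - b\|^2$, pushing the poison $x$ toward the target $t$ in feature space while keeping it close to a correctly labeled base image $b$ in input space. The following is a minimal PyTorch sketch under that reading; the function name, hyperparameters, and the gradient/proximal update schedule are illustrative assumptions, not the paper's exact recipe.

```python
# A minimal sketch of clean-label poison crafting via feature collision,
# assuming a pretrained feature extractor `f` (e.g. the penultimate layer
# of a CNN) and image tensors scaled to [0, 1]. All hyperparameter values
# here are illustrative assumptions.
import torch

def craft_poison(f, base, target, beta=0.1, lr=0.01, steps=1000, opacity=0.0):
    # Optional "watermarking": blend a low-opacity copy of the target into
    # the base image to make end-to-end poisoning more reliable
    # (opacity=0 disables it).
    x = ((1.0 - opacity) * base + opacity * target).clone()
    target_feats = f(target).detach()
    for _ in range(steps):
        x.requires_grad_(True)
        # Gradient step: move the poison toward the target in feature space.
        loss = (f(x) - target_feats).pow(2).sum()
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x_hat = x - lr * grad
            # Proximal step for beta * ||x - base||^2: keeps the poison
            # visually close to its correctly labeled base image.
            x = (x_hat + lr * beta * base) / (1.0 + lr * beta)
            x = x.clamp(0.0, 1.0)  # remain a valid image
    return x.detach()
```

The proximal term is what keeps the poison perceptually similar to its base image, so a human or automated labeler still assigns it the base class; that is what makes the label "clean" while the poison's features collide with the target's.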