Data poisoning attacks aim to manipulate the model produced by a learning
algorithm by adversarially modifying the training set. We consider differential
privacy as a defensive measure against this type of attack. We show that
differentially-private learners are resistant to data poisoning attacks when
the adversary can poison only a small number of items. However, this
protection degrades as
the adversary poisons more data. To illustrate, we design attack algorithms
targeting objective perturbation and output perturbation learners, two
standard approaches to differentially-private machine learning. Experiments
show that our attacks are effective once the attacker can poison sufficiently
many training items.
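
For readers unfamiliar with the second of these mechanisms, the following is
a minimal sketch of output perturbation, assuming a ridge-regression loss:
train the non-private model, then release its weights plus noise calibrated
to the solution's sensitivity. The function name, the choice of loss, and the
sensitivity constant are illustrative assumptions, not details taken from
this paper or its attack constructions.

    import numpy as np

    def output_perturbation_ridge(X, y, lam=1.0, epsilon=1.0, rng=None):
        """Sketch of an epsilon-differentially-private output
        perturbation learner: train ridge regression, then add noise
        whose magnitude is calibrated to the solution's L2 sensitivity.
        """
        rng = np.random.default_rng() if rng is None else rng
        n, d = X.shape
        # Non-private ridge solution of (1/n)||Xw - y||^2 + lam*||w||^2.
        w = np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)
        # Illustrative sensitivity bound; the exact constant depends on
        # the loss's Lipschitz constant and a bound on feature norms.
        sensitivity = 2.0 / (n * lam)
        # Sample noise b with density proportional to
        # exp(-epsilon * ||b|| / sensitivity): a uniformly random
        # direction scaled by a Gamma-distributed magnitude.
        direction = rng.standard_normal(d)
        direction /= np.linalg.norm(direction)
        magnitude = rng.gamma(shape=d, scale=sensitivity / epsilon)
        return w + magnitude * direction

This sketch also suggests why small poisoning budgets are ineffective:
modifying one training item shifts the non-private solution by at most its
sensitivity, a displacement the added noise already masks.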