Many machine learning systems rely on data collected in the wild from
untrusted sources, exposing the learning algorithms to data poisoning.
Attackers can inject malicious data into the training dataset to subvert the
learning process, compromising the performance of the algorithm and producing
errors in a targeted or indiscriminate way. Label flipping attacks are a
special case of data poisoning, where the attacker can control the labels
assigned to a fraction of the training points. Even when the attacker's
capabilities are constrained, these attacks have been shown to significantly
degrade the performance of the system. In this paper, we propose
an efficient algorithm to perform optimal label flipping poisoning attacks and
a mechanism to detect and relabel suspicious data points, mitigating the effect
of such poisoning attacks.
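
To make the threat model concrete, the sketch below simulates a label flipping attack and a sanitization step on synthetic data. The attack here is a *random* flipping baseline, not the optimal attack the paper proposes, and the defence is a generic k-NN majority-vote relabeling heuristic, shown only as a hypothetical stand-in for the detection-and-relabeling mechanism; the data, model, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Two well-separated Gaussian clusters in 2-D (illustrative data).
    half = n // 2
    X = np.vstack([rng.normal(-1.0, 1.0, size=(half, 2)),
                   rng.normal(+1.0, 1.0, size=(n - half, 2))])
    y = np.array([0] * half + [1] * (n - half))
    perm = rng.permutation(n)
    return X[perm], y[perm]

def train_logreg(X, y, lr=0.1, epochs=300):
    # Batch gradient descent on the logistic loss (bias column appended).
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean((Xb @ w > 0).astype(int) == y))

def random_flip(y, frac):
    # Baseline attack: flip the labels of a random fraction of the
    # training points. (The paper derives an *optimal* flipping strategy;
    # this random baseline only illustrates the threat model.)
    y_pois = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y_pois[idx] = 1 - y_pois[idx]
    return y_pois

def knn_relabel(X, y, k=5):
    # Defence sketch: relabel every training point with the majority
    # label of its k nearest neighbours. This is a generic k-NN
    # sanitization heuristic, not necessarily the paper's mechanism.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]
    return (y[nn].mean(axis=1) > 0.5).astype(int)

Xtr, ytr = make_data(200)
Xte, yte = make_data(1000)

y_pois = random_flip(ytr, frac=0.2)   # attacker controls 20% of labels
y_san = knn_relabel(Xtr, y_pois)      # defender relabels suspicious points

acc_clean = accuracy(train_logreg(Xtr, ytr), Xte, yte)
acc_pois = accuracy(train_logreg(Xtr, y_pois), Xte, yte)
acc_san = accuracy(train_logreg(Xtr, y_san), Xte, yte)
print(f"clean {acc_clean:.3f}  poisoned {acc_pois:.3f}  sanitized {acc_san:.3f}")
```

Because the flipped points are scattered at random, most of them disagree with their neighbourhood and the majority vote restores their original labels; an optimal attacker would instead concentrate flips where they are hardest to detect and most damaging to the learner.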