We use distributionally-robust optimization for machine learning to mitigate
the effect of data poisoning attacks. We provide performance guarantees for the
trained model on the original data (excluding the poisoned records) by
training the model against the worst-case distribution in a neighbourhood of
the empirical distribution (extracted from the training dataset corrupted by a
poisoning attack), where the neighbourhood is defined using the Wasserstein
distance.
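In symbols, with notation introduced here only for concreteness (the paper's own symbols may differ: $\theta$ the model parameters, $\ell$ the fitness function, $\widehat{\mathbb{P}}_n$ the empirical distribution of the $n$ poisoned training records, $W$ the Wasserstein distance, and $\rho$ the radius of the neighbourhood), the robust training problem is
\[
\min_{\theta} \; \sup_{\mathbb{P} \,:\, W(\mathbb{P}, \widehat{\mathbb{P}}_n) \le \rho} \; \mathbb{E}_{(x,y) \sim \mathbb{P}}\big[\ell(\theta; x, y)\big].
\]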
We relax the distributionally-robust machine learning problem by deriving an
upper bound on the worst-case fitness, consisting of the empirical
sample-averaged fitness plus the Lipschitz constant of the fitness function
(with respect to the data, for fixed model parameters) acting as a
regularizer.
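For the 1-Wasserstein distance, this relaxation follows from Kantorovich-Rubinstein duality; a standard form of the resulting bound, in the same assumed notation, is
\[
\sup_{W(\mathbb{P}, \widehat{\mathbb{P}}_n) \le \rho} \mathbb{E}_{\mathbb{P}}\big[\ell(\theta; x, y)\big] \;\le\; \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; x_i, y_i) \;+\; \rho\, \operatorname{Lip}\big(\ell(\theta; \cdot)\big),
\]
where $\operatorname{Lip}(\ell(\theta;\cdot))$ is the Lipschitz constant of the fitness with respect to the data record; the relaxed training problem minimizes the right-hand side over $\theta$.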
For regression models, we prove that this regularizer equals the dual norm of
the model parameters.
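For a linear regression model, the relaxed problem therefore becomes empirical loss minimization with a dual-norm penalty on the parameters. The sketch below is only a minimal illustration under assumptions not stated in the abstract: an absolute-deviation loss, the $\ell_2$ norm on the data space (which is self-dual), and plain subgradient descent; the paper's exact loss, norm pairing, and training procedure may differ.

```python
import numpy as np

def dro_objective(theta, X, y, rho):
    """Sample-averaged absolute loss plus rho times the (self-dual) l2 norm
    of the parameters -- the relaxed Wasserstein-DRO upper bound."""
    return np.mean(np.abs(y - X @ theta)) + rho * np.linalg.norm(theta)

def dro_subgradient(theta, X, y, rho):
    """A subgradient of the relaxed objective above."""
    r = y - X @ theta
    g_loss = -(X.T @ np.sign(r)) / len(y)                   # of mean |r|
    g_reg = rho * theta / (np.linalg.norm(theta) + 1e-12)   # of rho*||theta||
    return g_loss + g_reg

# Toy usage on a crudely poisoned regression dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
y[:10] += 20.0                        # poison a handful of labels

theta = np.zeros(3)
for _ in range(5000):                 # plain subgradient descent
    theta -= 0.01 * dro_subgradient(theta, X, y, rho=0.1)
print(theta)                          # roughly recovers [1.0, -2.0, 0.5]
```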
We use the Wine Quality dataset, the Boston Housing Market dataset, and the
Adult dataset to demonstrate the results of this paper.