We theoretically analyse the limits of robustness to test-time adversarial
and noisy examples in classification. Our work focuses on deriving bounds that
apply uniformly to all classifiers (i.e., all measurable functions from features
to labels) for a given problem. Our contributions are two-fold. (1) We use
optimal transport theory to derive variational formulae for the Bayes-optimal
error achievable on a given classification problem under adversarial attack.
The optimal adversarial attack is then itself an optimal transport plan for a
certain binary cost function induced by the specific attack model, and can be
computed via a simple algorithm based on maximum matching on bipartite graphs
(sketched below). (2) We derive explicit lower bounds on the Bayes-optimal
error for popular distance-based attacks. These
bounds are universal in the sense that they depend on the geometry of the
class-conditional distributions of the data, but not on any particular
classifier. Our results stand in sharp contrast to the existing literature,
wherein the adversarial vulnerability of classifiers is derived as a
consequence of nonzero ordinary (non-adversarial) test error.
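
To make contribution (1) concrete, the display below is a minimal sketch of how such a variational formula can look in the special case of binary classification with equal class priors and norm-bounded perturbations of size at most $\varepsilon$; the notation ($P_0$, $P_1$ for the class-conditional distributions, $\Pi(P_0,P_1)$ for their couplings, $R^*_\varepsilon$ for the smallest achievable adversarial error) is illustrative and not necessarily that of the paper's general statement.

\[
R^*_\varepsilon
\;=\; \frac{1}{2}\Bigl(1 \;-\; \inf_{\gamma \in \Pi(P_0,P_1)}
      \mathbb{E}_{(X_0,X_1)\sim\gamma}\bigl[\mathbf{1}\{\|X_0 - X_1\| > 2\varepsilon\}\bigr]\Bigr)
\;=\; \frac{1}{2}\,\sup_{\gamma \in \Pi(P_0,P_1)} \gamma\bigl(\|X_0 - X_1\| \le 2\varepsilon\bigr).
\]

The binary transport cost $\mathbf{1}\{\|x_0 - x_1\| > 2\varepsilon\}$ is the cost induced by the attack model: a pair of points from opposite classes lying within $2\varepsilon$ of each other can be pushed by the adversary onto a common point, so no classifier can get both right. With $\varepsilon = 0$ the formula collapses to the classical Bayes error $\tfrac{1}{2}\bigl(1 - \mathrm{TV}(P_0,P_1)\bigr)$.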
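For empirical class-conditional samples, the transport problem with this 0/1 cost reduces to a matching computation, which is presumably the role of the bipartite-matching algorithm mentioned above. The following is a hypothetical, self-contained sketch (not the paper's implementation): it assumes equal class priors, equally sized samples from the two classes, and $\ell_2$ attacks of radius eps, and lower-bounds the adversarial error of any classifier by the size of a maximum matching between "confusable" pairs.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching
from scipy.spatial.distance import cdist


def adversarial_error_lower_bound(x0, x1, eps):
    """Lower bound on the adversarial test error of *any* classifier.

    x0, x1 : (n, d) arrays of samples from the two class-conditional
        distributions (equal priors and equal sample sizes assumed).
    eps : attack radius in the l2 norm.
    """
    n = len(x0)
    assert len(x1) == n, "this sketch assumes equally sized class samples"
    # Binary transport cost: a pair (i, j) is 'confusable' when the two points
    # are within 2*eps of each other, so the attacker can move both onto a
    # common point of feature space.
    confusable = cdist(x0, x1) <= 2.0 * eps
    # Maximum-cardinality matching on the bipartite confusability graph.
    match = maximum_bipartite_matching(csr_matrix(confusable), perm_type='column')
    n_matched = int((match != -1).sum())
    # Every matched pair forces at least one mistake among its two points,
    # so any classifier errs on at least n_matched of the 2n test points.
    return n_matched / (2.0 * n)


# Toy usage: two Gaussian blobs; the bound grows with the attack radius.
rng = np.random.default_rng(0)
x0 = rng.normal(loc=-1.0, size=(500, 2))
x1 = rng.normal(loc=+1.0, size=(500, 2))
for eps in (0.0, 0.25, 0.5, 1.0):
    print(eps, adversarial_error_lower_bound(x0, x1, eps))

Using a general assignment or OT solver instead of the matching routine would give the same value here: with a 0/1 cost, optimal transport between two uniform empirical measures is an assignment problem whose optimum is determined by a maximum matching on the zero-cost edges. The graph construction, sample sizes, and function name above are illustrative assumptions, not the paper's.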