It is becoming increasingly important to understand the vulnerability of
machine learning models to adversarial attacks. In this paper we study the
feasibility of robust learning from the perspective of computational learning
theory, considering both sample and computational complexity. In particular,
our definition of robust learnability requires polynomial sample complexity.
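The abstract leaves the robustness notion informal; one standard formalization, assumed in this sketch (an "exact in the ball" robust risk, where the symbols $\mathrm{R}_\rho$, $B_\rho$, and $d_H$ are our notation, not the paper's), is
$$\mathrm{R}_\rho(h, c, D) \;=\; \Pr_{x \sim D}\bigl[\,\exists z \in B_\rho(x) : h(z) \neq c(z)\,\bigr], \qquad B_\rho(x) \;=\; \{\, z \in \{0,1\}^n : d_H(x, z) \le \rho \,\},$$
where $d_H$ is Hamming distance and $\rho$ is the adversary's perturbation budget. Robust learnability then asks for a learner that, from polynomially many samples (in $n$, $1/\epsilon$, and $1/\delta$), outputs $h$ with $\mathrm{R}_\rho(h, c, D) \le \epsilon$ with probability at least $1 - \delta$.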
start with two negative results. We show that no non-trivial concept class can
be robustly learned in the distribution-free setting against an adversary who
can perturb just a single input bit. We show moreover that the class of
monotone conjunctions cannot be robustly learned under the uniform distribution
against an adversary who can perturb $\omega(\log n)$ input bits. However if
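A rough intuition for this threshold (our gloss; the abstract itself gives no proof sketch): under the uniform distribution a monotone conjunction of length $k$ is satisfied with probability $2^{-k}$, so for $k = \omega(\log n)$ a polynomial-size sample contains no positive examples with high probability and distinct long conjunctions are statistically indistinguishable, while a budget of $\omega(\log n)$ bit flips suffices for the adversary to move a typical negative point into the region where two such candidate targets disagree.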
However, if the adversary is restricted to perturbing $O(\log n)$ bits, then
the class of monotone conjunctions can be robustly learned with respect to a
general class of distributions (one that includes the uniform distribution).
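As a toy illustration of the exact-in-the-ball notion assumed above (our own sketch, not the paper's algorithm; `conj` and `robustly_correct` are hypothetical names), the following brute-force check decides whether a hypothesis conjunction is robustly correct at a point against a budget-$\rho$ adversary:

```python
from itertools import combinations

def conj(S):
    """Monotone conjunction over bit indices S: true iff all bits in S are 1."""
    return lambda x: all(x[i] for i in S)

def robustly_correct(h, c, x, rho):
    """Exact-in-the-ball check: h must agree with c on every point within
    Hamming distance rho of x (brute force, so small n only)."""
    n = len(x)
    for r in range(rho + 1):
        for idxs in combinations(range(n), r):
            z = list(x)
            for i in idxs:
                z[i] ^= 1          # flip a set of at most rho bits
            if h(z) != c(z):
                return False       # adversary found a perturbation where h errs
    return True

# Target c = x0 AND x1, hypothesis h = x0, budget rho = 1, over n = 6 bits.
c, h = conj({0, 1}), conj({0})
x = [1, 1, 1, 0, 0, 0]
print(robustly_correct(h, c, x, rho=1))  # False: flipping bit 1 makes h and c disagree
```

Note that $B_\rho(x)$ contains $\sum_{r \le \rho} \binom{n}{r}$ points, so even this check is feasible only for small budgets.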
Finally, we provide a simple proof of the computational hardness of robust
learning on the Boolean hypercube. Unlike previous results of this nature, our
result does not rely on another computational model (e.g., the statistical
query model) nor on any
hardness assumption other than the existence of a hard learning problem in the
PAC framework.
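Schematically (our paraphrase of the claim, not a statement taken verbatim from the abstract), the hardness result has the shape
$$\exists\, \mathcal{C} \text{ hard to PAC learn} \;\Longrightarrow\; \exists\, \mathcal{C}' \text{ efficiently PAC learnable, yet with no efficient robust learner},$$
so computational hardness of robust learning requires nothing beyond standard PAC hardness.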