The existence of evasion attacks during the test phase of machine learning
algorithms represents a significant challenge to both their deployment and
understanding. These attacks can be carried out by adding imperceptible
perturbations to inputs to generate adversarial examples, and finding effective
defenses and detectors has proven difficult. In this paper, we step away
from the attack-defense arms race and seek to understand the limits of what can
be learned in the presence of an evasion adversary. In particular, we extend
the Probably Approximately Correct (PAC)-learning framework to account for the
presence of an adversary. We first define corrupted hypothesis classes, which
arise from standard binary hypothesis classes in the presence of an evasion
adversary, and derive the Vapnik-Chervonenkis (VC)-dimension for these classes,
which we denote the adversarial VC-dimension.
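As a sketch of this construction (the notation here is ours and illustrative: $N(x)$ denotes the set of points the adversary may substitute for $x$), the adversarial expected risk of a hypothesis $h$ from a class $\mathcal{H} \subseteq \{-1,+1\}^{\mathcal{X}}$ is
\[
L_N(h) \;=\; \Pr_{(x,y)\sim\mathcal{D}}\bigl[\,\exists\, x' \in N(x) \;:\; h(x') \neq y\,\bigr],
\]
and the adversarial VC-dimension is the VC-dimension of the corrupted class, i.e., of the loss functions $(x,y) \mapsto \mathbf{1}\{\exists\, x' \in N(x) : h(x') \neq y\}$ for $h \in \mathcal{H}$.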
We then show that sample complexity upper bounds from the Fundamental Theorem
of Statistical Learning can be extended to the case of evasion adversaries,
where the sample complexity is controlled by the adversarial VC-dimension.
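Hedging on constants, a representative form of the resulting bound mirrors the classical agnostic statement with the standard VC-dimension replaced by its adversarial counterpart $\mathrm{AVC}(\mathcal{H}, N)$:
\[
m(\epsilon, \delta) \;=\; O\!\left(\frac{\mathrm{AVC}(\mathcal{H}, N) + \log(1/\delta)}{\epsilon^{2}}\right)
\]
samples suffice for adversarial empirical risk minimization to return, with probability at least $1-\delta$, a hypothesis whose adversarial risk is within $\epsilon$ of the best achievable in $\mathcal{H}$.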
We then explicitly derive the adversarial VC-dimension for halfspace
classifiers in the presence of a sample-wise norm-constrained adversary of the
type commonly studied for evasion attacks, and show that it is equal to the
standard VC-dimension, resolving an open question.
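As a minimal illustration of this adversary (a sketch under our own conventions, not code from the paper: labels in $\{-1,+1\}$, an $\ell_p$-ball of radius $\epsilon$, ties at the decision boundary counted against the classifier), the adversarial 0-1 loss of a halfspace has a closed form: by Hölder's inequality, the worst-case perturbation shifts the signed margin by exactly $\epsilon\|w\|_q$, where $1/p + 1/q = 1$.

import numpy as np

def adversarial_loss_halfspace(w, b, x, y, eps, p=2.0):
    """Adversarial 0-1 loss of h(x) = sign(w.x + b) at (x, y), y in {-1, +1},
    against an adversary replacing x by any x' with ||x' - x||_p <= eps."""
    # Dual exponent q with 1/p + 1/q = 1 (q = inf for p = 1, q = 1 for p = inf).
    if np.isinf(p):
        q = 1.0
    elif p == 1.0:
        q = np.inf
    else:
        q = p / (p - 1.0)
    margin = y * (np.dot(w, x) + b)  # signed margin at the clean input
    # Holder: min over the ball of y * w.d equals -eps * ||w||_q
    worst_margin = margin - eps * np.linalg.norm(w, ord=q)
    return int(worst_margin <= 0)  # 1 iff some x' in the ball is misclassified

For example, with $w = (1, -1)$, $b = 0$, $x = (2, 0)$, $y = +1$, and $\epsilon = 1$ under $\ell_2$, the margin is $2 > \epsilon\|w\|_2 = \sqrt{2}$, so the loss is $0$.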
Finally, we prove that the adversarial VC-dimension can be either larger or
smaller than the standard VC-dimension, depending on the hypothesis class and
adversary, making it an interesting object of study in its own right.