Despite remarkable success in practice, modern machine learning models have
been found to be susceptible to adversarial attacks that make
human-imperceptible perturbations to the data yet result in serious and
potentially dangerous prediction errors. To address this issue, practitioners
often use adversarial training to learn models that are robust against such
attacks at the cost of higher generalization error on unperturbed test sets.
The conventional wisdom is that more training data should shrink the gap
between the generalization error of adversarially trained models and that of
standard models. However, we study the training of robust classifiers for both Gaussian
and Bernoulli models under $\ell_\infty$ attacks, and we prove that more data
may actually increase this gap. Furthermore, our theoretical results identify
if and when additional data will finally begin to shrink the gap. Lastly, we
experimentally demonstrate that our results also hold for linear regression
models, which may indicate that this phenomenon occurs more broadly.
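
For reference, robustness against $\ell_\infty$ attacks is typically pursued through the min-max adversarial training objective below, shown here in its standard form; the precise formulation analyzed in our Gaussian and Bernoulli models may differ in its details:
\[
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \left[ \max_{\|\delta\|_\infty \le \epsilon} \ell\big(f_\theta(x+\delta),\, y\big) \right],
\]
where $\epsilon$ is the attack budget, $\ell$ is the classification loss, and $f_\theta$ is the model being trained.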
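
The kind of experiment summarized above can be sketched in a few lines of NumPy. The snippet below is a hypothetical illustration, not the paper's code: it assumes a simple Gaussian model ($y$ uniform on $\{\pm 1\}$, $x \mid y \sim \mathcal{N}(y\mu, \sigma^2 I)$), fits a standard and an $\ell_\infty$-adversarially trained linear classifier with the logistic loss (using the closed-form inner maximization available for linear models), and reports the gap in standard test error as the sample size $n$ grows; the mean direction $\mu$, noise level $\sigma$, budget $\epsilon$, and all other constants are illustrative choices.
\begin{verbatim}
import numpy as np

# Hypothetical illustration, not the paper's code: estimate how the gap in
# standard (unperturbed) test error between an l_inf adversarially trained
# linear classifier and a standard one behaves as the training size n grows.
# The Gaussian data model and all constants below are illustrative assumptions.
rng = np.random.default_rng(0)
d, sigma, eps = 50, 1.0, 0.1            # dimension, noise level, l_inf budget
mu = np.ones(d) / np.sqrt(d)            # assumed class-mean direction

def sample(n):
    """y ~ Unif{-1,+1}, x | y ~ N(y * mu, sigma^2 * I)."""
    y = rng.choice([-1.0, 1.0], size=n)
    return y[:, None] * mu + sigma * rng.standard_normal((n, d)), y

def train(x, y, adversarial, steps=2000, lr=0.1):
    """Logistic regression via gradient descent.

    For a linear model the inner l_inf maximization has a closed form: the
    worst-case margin is y * w.x - eps * ||w||_1, which is what the
    adversarial branch optimizes here.
    """
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        margin = y * (x @ w)
        if adversarial:
            margin = margin - eps * np.abs(w).sum()
        p = 1.0 / (1.0 + np.exp(np.clip(margin, -30, 30)))  # sigmoid(-margin)
        grad = -(x * (p * y)[:, None]).mean(axis=0)
        if adversarial:
            grad = grad + eps * p.mean() * np.sign(w)
        w -= lr * grad
    return w

def standard_error(w, n_test=20000):
    x, y = sample(n_test)
    return np.mean(np.sign(x @ w) != y)

for n in [20, 50, 100, 500, 2000]:
    x, y = sample(n)
    gap = standard_error(train(x, y, True)) - standard_error(train(x, y, False))
    print(f"n={n:5d}  gap in standard test error (robust - standard): {gap:+.3f}")
\end{verbatim}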