Many machine learning models are vulnerable to adversarial attacks: perturbations that are imperceptible to humans can often cause a model to produce wrong predictions with high confidence. Moreover, although adversarial training can produce models that are robust on the training dataset, in some problems the learned models still fail to generalize well to the test data. In this paper, we focus on $\ell_\infty$ attacks and study the adversarially robust generalization problem through the lens of Rademacher complexity.
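As a schematic reminder (the notation here is illustrative rather than the paper's formal setup): given a loss $\ell$, a perturbation radius $\epsilon$, and a sample $S = \{(x_i, y_i)\}_{i=1}^n$, the adversarial loss and the empirical adversarial Rademacher complexity take the form
\[
\tilde{\ell}(f; x, y) = \max_{\|x' - x\|_\infty \le \epsilon} \ell(f(x'), y),
\qquad
\mathfrak{R}_S(\tilde{\mathcal{F}}) = \mathbb{E}_{\sigma} \Bigg[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \sigma_i \min_{\|x_i' - x_i\|_\infty \le \epsilon} y_i f(x_i') \Bigg],
\]
where the $\sigma_i$ are i.i.d. Rademacher random variables.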
For binary linear classifiers, we prove tight bounds on the adversarial Rademacher complexity and show that it is never smaller than its natural (non-adversarial) counterpart, and that it carries an unavoidable dependence on the input dimension unless the weight vector has bounded $\ell_1$ norm. These results extend to multi-class linear classifiers.
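Schematically, with constants suppressed (this is a paraphrase of the claims above rather than the exact statement of the bounds): for the class $\mathcal{F}_p = \{x \mapsto \langle w, x \rangle : \|w\|_p \le W\}$ with $1/p + 1/q = 1$, the two-sided bounds take the form
\[
\max\!\Big\{ \mathfrak{R}_S(\mathcal{F}_p),\; \epsilon W \tfrac{d^{1/q}}{\sqrt{n}} \Big\}
\;\lesssim\; \mathfrak{R}_S(\tilde{\mathcal{F}}_p)
\;\lesssim\; \mathfrak{R}_S(\mathcal{F}_p) + \epsilon W \tfrac{d^{1/q}}{\sqrt{n}},
\]
so the extra term is dimension-free precisely when $p = 1$, i.e., $q = \infty$ and $d^{1/q} = 1$.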
For (nonlinear) neural networks, we show that the same dimension dependence appears in the adversarial Rademacher complexity. Because the exact adversarial loss is hard to compute for ReLU networks, we further consider a surrogate adversarial loss for one-hidden-layer ReLU networks and prove margin bounds in this setting.
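The utility of a surrogate can be summarized schematically (constants and confidence terms simplified): if the surrogate $\hat{\ell}$ upper-bounds the adversarial loss pointwise, $\hat{\ell} \ge \tilde{\ell}$, and takes values in $[0, 1]$, then the standard Rademacher generalization bound for $\hat{\ell}$, which holds with probability at least $1 - \delta$, also controls the adversarial risk:
\[
\mathbb{E}\big[\tilde{\ell}(f; x, y)\big]
\;\le\; \mathbb{E}\big[\hat{\ell}(f; x, y)\big]
\;\le\; \frac{1}{n} \sum_{i=1}^n \hat{\ell}(f; x_i, y_i) + 2 \mathfrak{R}_S(\hat{\mathcal{F}}) + O\!\Big(\sqrt{\tfrac{\log(1/\delta)}{n}}\Big).
\]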
Our results indicate that imposing $\ell_1$ norm constraints on the weight matrices may be a way to improve generalization in the adversarial setting. We present experimental results that validate our theoretical findings.