It has been consistently reported that many machine learning models are
susceptible to adversarial attacks, i.e., small additive adversarial
perturbations applied to data points can cause misclassification. Adversarial
training using empirical risk minimization is considered to be the
state-of-the-art method for defense against adversarial attacks. Despite its
practical success, several questions about the generalization performance of
adversarial training remain open. In this paper, we derive
precise theoretical predictions for the performance of adversarial training in
binary classification. We consider the high-dimensional regime where the
dimension of the data grows with the size of the training dataset at a constant
ratio. Our results provide exact asymptotics for standard and adversarial test
errors of the estimators obtained by adversarial training with $\ell_q$-norm
bounded perturbations ($q \ge 1$) for both discriminative binary models and
generative Gaussian-mixture models with correlated features. Furthermore, we
use these sharp predictions to uncover several intriguing observations about
the effect of various parameters, including the over-parameterization ratio,
the data model, and the attack budget, on the adversarial and standard errors.
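To make the setup concrete: for a linear classifier, the inner maximization of adversarial training over $\ell_q$-bounded perturbations ($q \ge 1$) has a well-known closed form, since the worst-case additive perturbation of budget $\varepsilon$ reduces the margin $y\,\langle x, w\rangle$ by exactly $\varepsilon \lVert w \rVert_p$, where $p$ is the dual exponent ($1/p + 1/q = 1$). The following is a minimal sketch of this robust empirical risk minimization, not the paper's exact estimator; the choice $q = 2$ (so $p = 2$), the attack budget, and the toy Gaussian-mixture data are illustrative assumptions.

```python
import numpy as np

def fit_adversarial_logistic(X, y, eps, lr=0.1, steps=500):
    """Gradient descent on the robust logistic loss for a linear model:
    mean_i log(1 + exp(-(y_i <x_i, w> - eps * ||w||_2))).
    The eps * ||w||_2 term is the closed-form worst-case margin loss
    under l_2-bounded perturbations (q = 2, illustrative choice)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        norm = np.linalg.norm(w)
        margins = y * (X @ w) - eps * norm
        # sigmoid(-margin), written via tanh for numerical stability
        s = 0.5 * (1.0 - np.tanh(margins / 2.0))
        dnorm = w / norm if norm > 0 else np.zeros(d)
        grad = -np.mean(s[:, None] * (y[:, None] * X - eps * dnorm[None, :]),
                        axis=0)
        w -= lr * grad
    return w

# Toy generative Gaussian-mixture data (hypothetical parameters):
# x = y * mu + standard normal noise, labels y uniform on {-1, +1}.
rng = np.random.default_rng(0)
n, d = 400, 5
mu = np.ones(d)
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.normal(size=(n, d))

w_robust = fit_adversarial_logistic(X, y, eps=0.2)
train_acc = np.mean(np.sign(X @ w_robust) == y)
```

In the paper's high-dimensional regime, one would study such an estimator as $n, d \to \infty$ with $d/n$ fixed; the sketch above only illustrates the robust objective being minimized.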