Mixup is a popular data augmentation technique based on taking convex
combinations of pairs of examples and their labels. This simple technique has
been shown to substantially improve both the robustness and the generalization
of the trained model. However, it is not well understood why such improvement
occurs. In this paper, we provide theoretical analysis to demonstrate how using
Mixup in training helps model robustness and generalization. For robustness, we
show that minimizing the Mixup loss corresponds to approximately minimizing an
upper bound of the adversarial loss. This explains why models obtained by Mixup
training exhibit robustness to several kinds of adversarial attacks, such as the
Fast Gradient Sign Method (FGSM). For generalization, we prove that Mixup
augmentation corresponds to a specific type of data-adaptive regularization
which reduces overfitting. Our analysis provides new insights and a framework
to understand Mixup.
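
As a concrete illustration of the augmentation described above, the following minimal sketch mixes a batch with a shuffled copy of itself using a mixing weight drawn from a Beta distribution (the function name `mixup_batch` and the `Beta(alpha, alpha)` sampling follow the common formulation of Mixup; this is illustrative code, not the paper's implementation):

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Return a Mixup-augmented batch.

    x: (n, d) array of inputs; y: (n, k) array of one-hot labels.
    Each mixed example is lam * (x_i, y_i) + (1 - lam) * (x_j, y_j),
    where j is a random permutation index and lam ~ Beta(alpha, alpha).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)          # convex-combination weight in [0, 1]
    perm = rng.permutation(len(x))        # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix, lam
```

Because the labels are mixed with the same weight as the inputs, the resulting soft labels remain valid probability distributions, which is what allows the Mixup loss to be interpreted as a regularized version of the standard empirical risk.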