Adversarial training of deep neural networks often suffers from a serious
overfitting problem. Recently, this overfitting has been attributed to the
training data being insufficient for the sample complexity required to
generalize robustness. In traditional machine learning, one way to relieve
overfitting caused by a lack of data is to use ensemble methods. However,
adversarially training multiple networks is extremely expensive. Moreover, we
find a dilemma in choosing the target model for generating adversarial
examples: an attack optimized against the individual members of an ensemble is
suboptimal against the ensemble and incurs covariate shift, while an attack
against the ensemble weakens its members and loses the benefit of ensembling.
In this paper, we propose adversarial training with Stochastic Weight Averaging
(SWA): while performing adversarial training, we aggregate the temporal weight
states along the training trajectory. By adopting SWA, the benefit of
ensembling can be obtained without a large increase in computation and without
facing the dilemma. Moreover, we further adapt SWA to better suit adversarial
training. Empirical results on CIFAR-10, CIFAR-100, and SVHN show that our
method improves the robustness of models.
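
To make the weight-averaging idea concrete, the following is a minimal sketch of
adversarial training with SWA in PyTorch. The PGD attack, the hyperparameters
(epsilon, step size, swa_start), and the plain running mean of weights are
illustrative assumptions; they are not the exact recipe or the improved SWA
variant proposed in the paper.

```python
import copy
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft PGD adversarial examples against the current model weights (assumed attack)."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)              # stay in the valid pixel range
    return x_adv.detach()


def adversarial_training_with_swa(model, loader, epochs=100, swa_start=50, lr=0.1):
    """Adversarial training; from `swa_start` on, average weight states along the trajectory."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
    swa_model = copy.deepcopy(model)  # holds the running average of weights
    n_averaged = 0
    for epoch in range(epochs):
        model.train()
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)  # attack the current weights, not the average
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
        if epoch >= swa_start:  # aggregate temporal weight states (plain running mean)
            n_averaged += 1
            with torch.no_grad():
                for p_swa, p in zip(swa_model.parameters(), model.parameters()):
                    p_swa += (p - p_swa) / n_averaged
    # Note: BatchNorm running statistics of `swa_model` should be re-estimated
    # (e.g., with one pass over the training data) before the averaged model is evaluated.
    return swa_model
```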