Adversarial training is a powerful defense against adversarial examples.
Previous empirical results suggest that adversarial training requires wider
networks to achieve better performance. However, it remains unclear how
network width actually affects model robustness. In this paper, we carefully examine the
relationship between network width and model robustness. Specifically, we show
that model robustness is closely related to the tradeoff between natural
accuracy and perturbation stability, which is controlled by the robust
regularization parameter $\lambda$. With the same $\lambda$, wider networks
achieve better natural accuracy but worse perturbation stability, leading to
potentially worse overall robustness.
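For concreteness, such a $\lambda$-weighted tradeoff can be written as a
TRADES-style robust objective (a sketch under assumed notation; the loss
$\mathcal{L}$, the model $f_\theta$, and the $\epsilon$-ball are illustrative
choices, not the paper's exact formulation):
\[
\min_{\theta}\; \mathbb{E}_{(x,y)}\!\left[\, \mathcal{L}\big(f_\theta(x),\, y\big) \;+\; \lambda \max_{\|x'-x\|\le \epsilon} \mathcal{L}\big(f_\theta(x'),\, f_\theta(x)\big) \right],
\]
where the first term drives natural accuracy and the second, $\lambda$-weighted
term penalizes instability under perturbations.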
To understand the origin of this phenomenon, we further relate perturbation
stability to the network's local Lipschitzness. Leveraging recent results on
neural tangent kernels, we
theoretically show that wider networks tend to have worse perturbation
stability. Our analyses suggest that: 1) the common practice of first
fine-tuning $\lambda$ on a small network and then directly reusing it to train
a wider model can degrade robustness; 2) one needs to properly enlarge
$\lambda$ to fully unleash the robustness potential of wider models.
Finally, we propose a new Width Adjusted Regularization (WAR) method that
adaptively enlarges $\lambda$ on wide models and significantly reduces tuning
time.
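As an illustration only, a width-adjusted scaling of $\lambda$ might look like
the following minimal Python sketch; the function name \texttt{war\_lambda} and
the multiplicative width-ratio rule are assumptions for exposition, not the
WAR schedule actually derived in the paper:
\begin{verbatim}
def war_lambda(base_lambda: float, width: int, base_width: int) -> float:
    """Hypothetical width-adjusted regularization weight.

    base_lambda: robust regularization weight tuned on a narrow network
    base_width:  channel width of that narrow reference network
    width:       channel width of the (wider) target network

    The linear width ratio below is an assumed scaling rule for
    illustration; WAR's actual adjustment is defined in the paper.
    """
    if width < base_width or base_width <= 0:
        raise ValueError("expected width >= base_width > 0")
    return base_lambda * (width / base_width)

# Example: a lambda of 6.0 tuned at width 1 would be enlarged to 60.0
# when training a 10x wider model.
wide_lambda = war_lambda(base_lambda=6.0, width=10, base_width=1)
\end{verbatim}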