Deep models, while extremely flexible and accurate, are surprisingly
vulnerable to small, imperceptible perturbations known as adversarial
attacks. While the majority of existing attacks measure perturbations under
the $\ell_p$ metric, the Wasserstein distance, which takes the geometry of
pixel space into account, has long been known to be a suitable metric for
measuring image quality and has recently emerged as a compelling
alternative to the $\ell_p$ metric in adversarial attacks. However,
constructing an effective attack under the Wasserstein metric is
computationally much more challenging and calls for better optimization
algorithms. We address this gap in two ways: (a) we develop an exact yet
efficient projection operator to enable a stronger projected gradient attack;
(b) we show that the Frank-Wolfe method equipped with a suitable linear
minimization oracle runs extremely fast under Wasserstein constraints. Our
algorithms not only converge faster but also generate much stronger attacks.
For instance, we decrease the accuracy of a residual network on CIFAR-10 to
$3.4\%$ within a Wasserstein perturbation ball of radius $0.005$, in contrast
to $65.6\%$ using the previous Wasserstein attack based on an
\emph{approximate} projection operator. Furthermore, employing our stronger
attacks in adversarial training significantly improves the robustness of the
resulting models.
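Schematically, and with generic placeholders for the loss $\ell$, the clean image $x_0$, and step sizes $\alpha$ and $\gamma_t$ (none of which are specified above), both attacks operate on the Wasserstein ball $\mathcal{B}_W(x_0,\epsilon)=\{x : W(x,x_0)\le\epsilon\}$ through the standard projected gradient and Frank-Wolfe updates:
\[
x_{t+1} = \Pi_{\mathcal{B}_W(x_0,\epsilon)}\bigl(x_t + \alpha\,\nabla_x \ell(x_t)\bigr),
\qquad
s_t \in \operatorname*{arg\,max}_{s \in \mathcal{B}_W(x_0,\epsilon)} \langle \nabla_x \ell(x_t),\, s\rangle,\quad
x_{t+1} = (1-\gamma_t)\, x_t + \gamma_t\, s_t,
\]
where $\Pi$ denotes the exact projection of contribution (a) and the linear subproblem defining $s_t$ is handled by the oracle of contribution (b).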