In recent years several adversarial attacks and defenses have been proposed.
Seemingly robust models often turn out to be non-robust once more sophisticated
attacks are used. One way out of this dilemma is provable robustness
guarantees. While provably robust models for specific $l_p$-perturbation models
have been developed, we show that they come with no guarantee against
$l_q$-perturbations for $q \neq p$. We propose a new regularization scheme,
MMR-Universal, for ReLU networks which enforces robustness with respect to both
$l_1$- and $l_\infty$-perturbations, and we show that this leads to the first
models that are provably robust with respect to any $l_p$-norm for $p\geq 1$.
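As background for why $l_1$ and $l_\infty$ are the two extreme cases of the $l_p$ family, recall the standard inequality $\|x\|_\infty \leq \|x\|_p \leq \|x\|_1$ for all $p \geq 1$. The following is a minimal numerical sketch of this inequality using NumPy; it is an illustration of the norm relation only, not the certification method of the paper.

```python
# Numerical check (illustration only, not the paper's method) of the
# standard norm inequality ||x||_inf <= ||x||_p <= ||x||_1 for p >= 1,
# which places l_1 and l_inf at the two extremes of the l_p family.
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    d = int(rng.integers(1, 50))      # random dimension
    x = rng.standard_normal(d)        # random perturbation vector
    l1 = np.sum(np.abs(x))            # l_1-norm
    linf = np.max(np.abs(x))          # l_inf-norm
    for p in (1.5, 2.0, 4.0, 10.0):
        lp = np.sum(np.abs(x) ** p) ** (1.0 / p)  # l_p-norm
        assert linf - 1e-12 <= lp <= l1 + 1e-12
```

This ordering is what makes guarantees at the two endpoints informative about the whole range of intermediate $l_p$-perturbation sets.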