Convex relaxations are effective for training and certifying neural networks
against norm-bounded adversarial attacks, but they leave a large gap between
certifiable and empirical robustness. In principle, convex relaxation can
provide tight bounds if the solution to the relaxed problem is feasible for the
original non-convex problem. We propose two regularizers that, when applied
during training, yield networks with tighter convex relaxation bounds on
robustness. In all of our experiments, the proposed regularizers result in
higher certified accuracy than non-regularized baselines.
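
As a minimal formalization of the tightness condition above (the notation here is ours, introduced only for illustration): write certification as minimizing a linear objective $c^\top z$ over the non-convex set $\mathcal{S}$ of activations realizable by the network, and the relaxation as the same minimization over a convex superset $\widehat{\mathcal{S}} \supseteq \mathcal{S}$:
\[
p^{*} \;=\; \min_{z \in \mathcal{S}} \, c^\top z
\;\;\ge\;\;
\hat{p} \;=\; \min_{z \in \widehat{\mathcal{S}}} \, c^\top z .
\]
If the relaxed minimizer $\hat{z}$ happens to lie in $\mathcal{S}$, then $p^{*} \le c^\top \hat{z} = \hat{p} \le p^{*}$, so the relaxed bound is exact; encouraging this condition during training is, presumably, how regularization can tighten certified bounds.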