Neural networks are part of many contemporary NLP systems, yet their
empirical successes come at the price of vulnerability to adversarial attacks.
Previous work has used adversarial training and data augmentation to partially
mitigate such brittleness, but these methods are unlikely to find worst-case
adversaries due to the complexity of the search space arising from discrete
text perturbations. In this work, we approach the problem from the opposite
direction: we formally verify a system's robustness against a predefined class
of adversarial attacks. We study text classification under synonym replacements
or character-flip perturbations. We propose modeling these input perturbations
as a simplex and then using Interval Bound Propagation -- a formal model
verification method. We modify the conventional log-likelihood training
objective to train models that can be efficiently verified; without this
modification, verification would require a search of exponential complexity.
The resulting models show only a small difference in nominal accuracy, but
achieve much improved verified accuracy under perturbations and come with an
efficiently computable formal guarantee on worst-case adversaries.
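
To make the approach concrete, the following is a minimal sketch (not the authors' implementation) of the two ingredients summarized above, assuming a simple bag-of-embeddings classifier: interval bounds are taken over the simplex vertices spanned by each token's allowed substitutions, propagated through an affine layer with Interval Bound Propagation (IBP), and the resulting worst-case logits are folded into a verified training loss. All function names and the mixing weight kappa below are illustrative assumptions, not the paper's API.

import numpy as np

def simplex_input_bounds(token_embeddings, substitution_embeddings):
    # Elementwise lower/upper bounds over each token's original embedding and
    # the embeddings of its allowed substitutions (the simplex vertices).
    lows, highs = [], []
    for orig, subs in zip(token_embeddings, substitution_embeddings):
        vertices = np.stack([orig] + list(subs))
        lows.append(vertices.min(axis=0))
        highs.append(vertices.max(axis=0))
    return np.stack(lows), np.stack(highs)

def ibp_affine(low, high, W, b):
    # Interval Bound Propagation through an affine layer: propagate the
    # interval center and radius, using |W| to bound the radius.
    center, radius = (high + low) / 2.0, (high - low) / 2.0
    out_center = center @ W.T + b
    out_radius = radius @ np.abs(W).T
    return out_center - out_radius, out_center + out_radius

def worst_case_logits(logit_low, logit_high, true_class):
    # Most adversarial logits consistent with the bounds: the true class takes
    # its lower bound, every other class its upper bound.
    worst = logit_high.copy()
    worst[true_class] = logit_low[true_class]
    return worst

def cross_entropy(logits, true_class):
    z = logits - logits.max()
    return -(z[true_class] - np.log(np.exp(z).sum()))

def verified_loss(nominal_logits, worst_logits, true_class, kappa=0.5):
    # Hypothetical combined objective: a kappa-weighted mix of the nominal
    # cross-entropy and the cross-entropy of the worst-case (verified) logits.
    return kappa * cross_entropy(nominal_logits, true_class) \
        + (1.0 - kappa) * cross_entropy(worst_logits, true_class)

# Toy usage: two tokens, one allowed synonym each, a 2-class linear classifier.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=4) for w in ["good", "great", "movie", "film"]}
tokens = [emb["good"], emb["movie"]]
synonyms = [[emb["great"]], [emb["film"]]]

low, high = simplex_input_bounds(tokens, synonyms)
low, high = low.mean(axis=0), high.mean(axis=0)  # mean-pooling is linear, so the bounds stay valid
W, b = rng.normal(size=(2, 4)), np.zeros(2)
logit_low, logit_high = ibp_affine(low, high, W, b)
nominal_logits = np.mean(tokens, axis=0) @ W.T + b
worst = worst_case_logits(logit_low, logit_high, true_class=1)
loss = verified_loss(nominal_logits, worst, true_class=1)

In a deeper model the same interval arithmetic would be pushed through each layer (with suitable relaxations for nonlinearities), which is what keeps verification efficient compared to enumerating the exponentially many combinations of discrete perturbations.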