Several recent papers have discussed using Lipschitz constants to limit
the susceptibility of neural networks to adversarial examples. We analyze
recently proposed methods for computing the Lipschitz constant. We show that
the Lipschitz constant may indeed enable adversarially robust neural networks.
However, the methods currently employed for computing it suffer from
theoretical and practical limitations. We argue that addressing these
shortcomings is a promising direction for future research into certified
adversarial defenses.
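
As a point of reference for why a Lipschitz bound can certify robustness (a standard argument, not spelled out in the abstract itself): if the margin at a correctly classified input $x$ is $g(x) = f_y(x) - \max_{j \neq y} f_j(x)$ and $g$ is $L$-Lipschitz with respect to the $\ell_2$ norm, then

\[
g(x + \delta) \;\ge\; g(x) - L\,\lVert \delta \rVert_2 \;>\; 0
\quad\text{whenever}\quad
\lVert \delta \rVert_2 < \frac{g(x)}{L},
\]

so no perturbation of $\ell_2$ norm smaller than $g(x)/L$ can change the prediction.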
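The abstract does not identify which computation methods it critiques, but a common, simple approach is to upper-bound a feed-forward network's Lipschitz constant by the product of its layers' operator norms. The sketch below illustrates that bound, assuming a plain fully connected ReLU network and the $\ell_2$ norm; the function name and layer shapes are hypothetical, not from the paper.

```python
import numpy as np

def product_lipschitz_bound(weights):
    """Upper bound on the l2 Lipschitz constant of a feed-forward network
    with 1-Lipschitz activations (e.g. ReLU): the product of the spectral
    norms (largest singular values) of the weight matrices."""
    bound = 1.0
    for W in weights:
        # largest singular value = operator 2-norm of the linear layer
        bound *= np.linalg.svd(W, compute_uv=False)[0]
    return bound

# Hypothetical 3-layer network mapping R^32 -> R^64 -> R^32 -> R
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)),
           rng.standard_normal((32, 64)),
           rng.standard_normal((1, 32))]
print(f"product bound: {product_lipschitz_bound(weights):.2f}")
```

Such product bounds are always valid but are commonly criticized as very loose in practice, which is the kind of limitation the abstract alludes to.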