We propose a method for improving adversarial robustness that adds a new
bounded function just before softmax. Recent studies hypothesize that keeping
logits (the inputs of softmax) small, e.g., through logit regularization, can
improve the adversarial robustness of deep learning models. Following this
hypothesis, we analyze the norms of logit vectors at the optimal point under
the universal approximation assumption, and we explore new methods for
constraining logits by adding a bounded function before softmax. We reveal,
both theoretically and empirically, that making logits small by adding a
common activation function, e.g., the hyperbolic tangent, does not improve
adversarial robustness, because the input vectors of the function (pre-logit
vectors) can still have large norms.
Based on these theoretical findings, we develop a new bounded function.
Adding our function improves adversarial robustness because it keeps the norms
of both logit and pre-logit vectors small. Since our method only adds one
activation function before softmax, it is easy to combine with adversarial
training. Our experiments demonstrate that, without adversarial training, our
method is comparable to logit regularization methods in terms of accuracy on
adversarially perturbed datasets. Furthermore, when combined with adversarial
training, it is superior or comparable to logit regularization methods and a
recent defense method (TRADES).
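
To make the construction concrete, the following is a minimal PyTorch sketch of inserting a bounded function between the final linear layer and softmax. It is not the paper's reference implementation: the functional form `scale * tanh(z)`, the `scale` hyperparameter, and the name `BoundedLogitHead` are illustrative assumptions, since the abstract does not specify the proposed function (and indeed notes that a plain tanh bound alone is insufficient).

```python
import torch
import torch.nn as nn

class BoundedLogitHead(nn.Module):
    """Classifier head applying a bounded function to logits before softmax.

    The bound ``scale * tanh(z)`` is only an illustrative choice; per the
    abstract, a common bounded activation such as tanh does NOT by itself
    improve robustness, because pre-logit vectors can still have large norms.
    """

    def __init__(self, in_features: int, num_classes: int, scale: float = 1.0):
        super().__init__()
        self.linear = nn.Linear(in_features, num_classes)
        self.scale = scale  # hypothetical scale hyperparameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.linear(x)                 # pre-logit vector (input of the bound)
        return self.scale * torch.tanh(z)  # bounded logits, fed to softmax

# Usage: the bounded logits feed directly into the usual cross-entropy loss,
# which applies softmax internally, so an adversarial training pipeline is
# unchanged except for this head.
head = BoundedLogitHead(in_features=512, num_classes=10, scale=1.0)
features = torch.randn(8, 512)  # stand-in for backbone features
logits = head(features)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
```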