Detecting test samples drawn sufficiently far away from the training
distribution, whether statistically (out-of-distribution) or adversarially, is a
fundamental requirement for safely deploying classifiers in many real-world
machine learning applications.
However, deep neural networks with a softmax classifier are known to produce
highly overconfident posterior distributions even for such abnormal samples. In
this paper, we propose a simple yet effective method for detecting abnormal
samples that is applicable to any pre-trained softmax neural classifier. We
obtain class-conditional Gaussian distributions over the (low- and high-level)
features of the deep model under Gaussian discriminant analysis, which yield a
confidence score based on the Mahalanobis distance.
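As a rough sketch of this idea (not the authors' released implementation; the
function names and the empirical tied-covariance estimate below are
illustrative), the score for a single feature layer can be computed from
activations extracted by a pre-trained network as follows:

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit class-conditional Gaussians with a shared (tied) covariance,
    as in Gaussian discriminant analysis. `features` are assumed to be
    activations extracted from a pre-trained network."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)   # tied covariance estimate
    precision = np.linalg.pinv(cov)               # pseudo-inverse for stability
    return means, precision

def mahalanobis_score(x, means, precision):
    """Confidence score: negative of the minimum Mahalanobis distance to
    any class mean; low scores flag out-of-distribution or adversarial
    inputs."""
    dists = [(x - mu) @ precision @ (x - mu) for mu in means.values()]
    return -min(dists)
```

A threshold on this score then separates normal from abnormal inputs; in the
full method, scores from several feature layers of the network are combined,
whereas the sketch above covers a single layer.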
While
most prior methods have been evaluated for detecting either out-of-distribution
or adversarial samples, but not both, the proposed method achieves
state-of-the-art performance for both cases in our experiments. Moreover, we
found that our proposed method is more robust in harsh cases, e.g., when the
training dataset has noisy labels or a small number of samples. Finally, we show
that the proposed method enjoys broader usage by applying it to
class-incremental learning: whenever out-of-distribution samples are detected,
our classification rule can incorporate new classes well without further
training of the deep models.
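For intuition, and under the same illustrative assumptions as the sketch above,
incorporating a new class could look like the following: since the
classification rule is nearest class mean under the Mahalanobis distance, a new
class only requires estimating its mean from a few samples, while the shared
covariance is reused and no network weights are updated.

```python
def add_class(means, class_id, new_class_features):
    """Incorporate a newly detected class without retraining the network:
    estimate its mean from a few labeled samples and reuse the shared
    covariance already fitted on the original classes."""
    means[class_id] = new_class_features.mean(axis=0)

def classify(x, means, precision):
    """Generative classification rule: assign x to the nearest class mean
    under the Mahalanobis distance."""
    return min(means, key=lambda c: (x - means[c]) @ precision @ (x - means[c]))
```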