Deterministic neural nets have been shown to learn effective predictors on a
wide range of machine learning problems. However, as the standard approach is
to train the network to minimize a prediction loss, the resultant model remains
ignorant of its own prediction confidence. In contrast to Bayesian neural nets,
which infer prediction uncertainty indirectly through weight uncertainties, we
propose to model this uncertainty explicitly using the theory of subjective logic. By
placing a Dirichlet distribution on the class probabilities, we treat the
predictions of a neural net as subjective opinions and use a deterministic
neural net to learn, from data, the function that collects the evidence leading
to these opinions. The resultant predictor for a multi-class classification problem is
another Dirichlet distribution whose parameters are set by the continuous
output of a neural net. We provide a preliminary analysis of how the
properties of our new loss function drive improved uncertainty estimation.
We observe that our method achieves unprecedented success in detecting
out-of-distribution queries and in robustness to adversarial perturbations.
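
To make the Dirichlet parameterization concrete, the following minimal PyTorch sketch shows how continuous network outputs could be mapped to a subjective opinion. The helper name `dirichlet_opinion` and the softplus evidence activation are illustrative assumptions, not details taken from the abstract; the mapping itself follows subjective logic, where non-negative evidence e_k yields Dirichlet parameters alpha_k = e_k + 1, Dirichlet strength S = sum_k alpha_k, expected class probabilities alpha_k / S, and total uncertainty mass u = K / S for K classes.

```python
import torch
import torch.nn.functional as F

def dirichlet_opinion(logits: torch.Tensor):
    """Map raw network outputs to a Dirichlet opinion (illustrative sketch)."""
    # Non-negative evidence per class; softplus is an assumed choice here
    # (ReLU or exp are equally plausible activations).
    evidence = F.softplus(logits)
    # Subjective-logic parameterization: alpha_k = e_k + 1, so zero
    # collected evidence recovers the uniform Dirichlet prior.
    alpha = evidence + 1.0
    strength = alpha.sum(dim=-1, keepdim=True)   # S = sum_k alpha_k
    prob = alpha / strength                      # expected class probabilities
    num_classes = logits.shape[-1]
    uncertainty = num_classes / strength         # u = K / S, in (0, 1]
    return alpha, prob, uncertainty

# A 3-class query with almost no collected evidence yields
# near-uniform probabilities and uncertainty close to 1.
alpha, prob, u = dirichlet_opinion(torch.tensor([[-3.0, -3.0, -3.0]]))
```

This is precisely the behavior the abstract targets: for an out-of-distribution query the network can report little evidence for every class, and the resulting opinion states "I don't know" through a large uncertainty mass instead of an overconfident point prediction.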