Abstract
Deep neural networks (NNs) are powerful black box predictors that have
recently achieved impressive performance on a wide spectrum of tasks.
Quantifying predictive uncertainty in NNs is a challenging and yet unsolved
problem. Bayesian NNs, which learn a distribution over weights, are currently
the state-of-the-art for estimating predictive uncertainty; however, these
require significant modifications to the training procedure and are
computationally expensive compared to standard (non-Bayesian) NNs. We propose
an alternative to Bayesian NNs that is simple to implement, readily
parallelizable, requires very little hyperparameter tuning, and yields high
quality predictive uncertainty estimates. Through a series of experiments on
classification and regression benchmarks, we demonstrate that our method
produces well-calibrated uncertainty estimates that are as good as or better
than those of approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate
the predictive uncertainty on test examples from known and unknown
distributions, and show that our method is able to express higher uncertainty
on out-of-distribution examples. We demonstrate the scalability of our method
by evaluating predictive uncertainty estimates on ImageNet.
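The abstract does not spell out the proposed method, so the sketch below is only an illustration of the kind of uncertainty evaluation described above: it assumes the non-Bayesian alternative produces a predictive distribution by averaging the softmax outputs of several independently trained networks, and quantifies uncertainty with the entropy of that averaged distribution. The averaging choice, member count, and all numbers are hypothetical assumptions, not the paper's specification.

```python
import numpy as np

def ensemble_predictive_entropy(member_probs):
    """Average class probabilities over ensemble members and return the
    averaged prediction together with its entropy (in nats).

    member_probs: array of shape (num_members, num_classes), each row a
    softmax output from one independently trained network (assumption).
    """
    member_probs = np.asarray(member_probs, dtype=float)
    mean_probs = member_probs.mean(axis=0)       # averaged predictive distribution
    eps = 1e-12                                  # guard against log(0)
    entropy = -np.sum(mean_probs * np.log(mean_probs + eps))
    return mean_probs, entropy

# Hypothetical softmax outputs from 3 members on an in-distribution input:
# the members agree, so the averaged prediction is confident (low entropy).
in_dist = [[0.95, 0.03, 0.02],
           [0.90, 0.06, 0.04],
           [0.93, 0.04, 0.03]]

# Hypothetical outputs on an out-of-distribution input: the members
# disagree, so the averaged prediction is diffuse (higher entropy).
ood = [[0.70, 0.20, 0.10],
       [0.10, 0.75, 0.15],
       [0.20, 0.15, 0.65]]

_, h_in = ensemble_predictive_entropy(in_dist)
_, h_ood = ensemble_predictive_entropy(ood)
print(f"entropy (in-distribution):     {h_in:.3f} nats")
print(f"entropy (out-of-distribution): {h_ood:.3f} nats")
```

Under these assumptions, the out-of-distribution input yields a higher predictive entropy, which is the behavior the abstract refers to as expressing higher uncertainty on out-of-distribution examples.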