Abstract
Predictions made by deep learning models are sensitive to data perturbations,
adversarial attacks, and out-of-distribution inputs. To build a trusted AI
system, it is therefore critical to accurately quantify the prediction
uncertainties. While current efforts focus on improving uncertainty
quantification accuracy and efficiency, there is a need to identify uncertainty
sources and take actions to mitigate their effects on predictions. Therefore,
we propose to develop explainable and actionable Bayesian deep learning methods
to not only perform accurate uncertainty quantification but also explain the
uncertainties, identify their sources, and propose strategies to mitigate the
uncertainty impacts. Specifically, we introduce UA-Backprop, a gradient-based
uncertainty attribution method that identifies the most problematic regions of
the input contributing to the prediction uncertainty. Compared to existing
methods, UA-Backprop offers competitive accuracy, relaxed assumptions, and high
efficiency. Moreover, we propose an uncertainty mitigation strategy that
leverages the attribution results as attention to further improve the model
performance. Both qualitative and quantitative evaluations are conducted to
demonstrate the effectiveness of our proposed methods.
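To make the idea of gradient-based uncertainty attribution concrete, the sketch below shows a generic variant of the approach the abstract describes: estimate predictive uncertainty with Monte Carlo dropout, then backpropagate that uncertainty to the input to obtain a per-pixel attribution map. This is a minimal illustrative sketch of the general technique, not the paper's exact UA-Backprop rule; the function name and sampling scheme are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def uncertainty_attribution(model, x, n_samples=20):
    """Illustrative sketch (not the paper's exact UA-Backprop rule):
    attribute predictive uncertainty to input regions by backpropagating
    the predictive entropy, estimated via MC dropout, to the input."""
    x = x.clone().requires_grad_(True)
    model.train()  # keep dropout active so forward passes are stochastic
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    mean_probs = probs.mean(dim=0)
    # Predictive entropy of the MC-averaged class distribution
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum()
    entropy.backward()
    # Gradient magnitude as each input element's contribution to uncertainty
    return x.grad.abs()
```

The returned map has the same shape as the input, so it can highlight the most problematic input regions or, as in the mitigation strategy above, serve as an attention signal during further training.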