Data poisoning attacks compromise the integrity of machine-learning models by
introducing malicious training samples that influence the model's predictions
at test time. In this work, we investigate a backdoor data poisoning attack on
deep neural networks (DNNs) in which a backdoor pattern is inserted into the
training images. The resulting model misclassifies poisoned test samples while
maintaining high accuracy on the clean test set.
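As a rough illustration of this attack setup (the trigger shape, its location,
the poisoning rate, and the target class below are assumptions made for the
sketch, not details taken from this work), a BadNets-style poisoning step
could look as follows:

    import numpy as np

    def poison_batch(images, labels, target_class=0, trigger_size=3,
                     rate=0.1, rng=None):
        """Stamp a small white square onto a random fraction of the images and
        relabel them to the attacker's target class (illustrative only).
        images: float array in [0, 1] with shape (N, H, W, C)."""
        rng = rng or np.random.default_rng(0)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
        # Place the trigger patch in the bottom-right corner of each selected image.
        images[idx, -trigger_size:, -trigger_size:, :] = 1.0
        labels[idx] = target_class
        return images, labels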
We present two approaches for detecting such poisoned samples by quantifying
the uncertainty estimates associated with the trained models. In the first
approach, we model the outputs of the various layers (deep features) with
parametric probability distributions learnt from a clean held-out dataset. At
inference, the likelihoods of the deep features with respect to these
distributions are computed to derive uncertainty estimates.
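A minimal sketch of this first approach, assuming class-conditional
multivariate Gaussians as the parametric distributions and penultimate-layer
activations as the deep features (both the distribution family and the layer
choice are assumptions here):

    import numpy as np
    from scipy.stats import multivariate_normal

    def fit_feature_gaussians(features, labels, reg=1e-3):
        """Fit one Gaussian per class to deep features extracted from the
        clean held-out set; `reg` regularizes the covariance estimate."""
        models = {}
        for c in np.unique(labels):
            x = features[labels == c]
            cov = np.cov(x, rowvar=False) + reg * np.eye(x.shape[1])
            models[c] = multivariate_normal(mean=x.mean(axis=0), cov=cov)
        return models

    def uncertainty_score(models, feature):
        """A low best-class log-likelihood means the feature is unlikely under
        every clean-feature distribution, i.e. a high-uncertainty sample."""
        return -max(m.logpdf(feature) for m in models.values())

One way to use such a score is to calibrate a threshold on a clean validation
split and flag test inputs whose score exceeds it.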
In the second approach, we use Bayesian deep neural networks trained with
mean-field variational inference to estimate the model uncertainty associated
with the predictions. The uncertainty estimates from these two methods are
then used to discriminate clean samples from poisoned ones.
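A minimal sketch of the second approach, with a mean-field Gaussian posterior
over the weights of a linear layer and Monte-Carlo sampling to obtain a
predictive-entropy uncertainty score (the KL regularizer needed for the full
variational objective, as well as the actual architecture and training
details, are omitted; this is an assumed, simplified form rather than the
exact implementation used in this work):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MeanFieldLinear(nn.Module):
        """Linear layer with a factorized Gaussian posterior over weights and
        biases, sampled via the reparameterization trick."""
        def __init__(self, in_features, out_features):
            super().__init__()
            self.w_mu = nn.Parameter(torch.empty(out_features, in_features))
            self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
            self.b_mu = nn.Parameter(torch.zeros(out_features))
            self.b_rho = nn.Parameter(torch.full((out_features,), -5.0))
            nn.init.xavier_normal_(self.w_mu)

        def forward(self, x):
            # Reparameterized samples: w = mu + softplus(rho) * eps.
            w = self.w_mu + F.softplus(self.w_rho) * torch.randn_like(self.w_mu)
            b = self.b_mu + F.softplus(self.b_rho) * torch.randn_like(self.b_mu)
            return F.linear(x, w, b)

    def predictive_entropy(model, x, n_samples=30):
        """Uncertainty as the entropy of the mean softmax over several
        stochastic forward passes (the model is assumed to output logits)."""
        with torch.no_grad():
            probs = torch.stack([F.softmax(model(x), dim=-1)
                                 for _ in range(n_samples)])
        mean = probs.mean(dim=0)
        return -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)

The number of stochastic passes n_samples trades computation for the
stability of the resulting uncertainty estimate.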