Machine learning models are vulnerable to adversarial examples: small
perturbations of input samples deliberately crafted to cause
misclassification. Beyond being an obvious security threat, adversarial
examples also yield insights into the model itself. We investigate
adversarial examples in the context of the uncertainty measures of Bayesian
neural networks (BNNs). As these measures are highly non-smooth, we use a
smooth Gaussian process classifier (GPC) as a substitute. We show that both
confidence and uncertainty can be unsuspicious even when the output is wrong.
Intriguingly, we
find subtle differences in the features influencing uncertainty and confidence
for most tasks.
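
A minimal sketch of the kind of experiment described above, assuming
scikit-learn's GaussianProcessClassifier as the smooth GPC substitute and a
simple finite-difference FGSM-style attack (not the paper's method); the
helpers prob_of_y0 and entropy are illustrative names introduced here. It
perturbs an input to lower the probability of its original class while
tracking confidence (maximum class probability) and predictive entropy as an
uncertainty measure.

    # A minimal sketch, not the paper's method: a finite-difference gradient
    # attack on a scikit-learn GaussianProcessClassifier, tracking confidence
    # and predictive entropy as an uncertainty measure.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
    gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)

    x0 = X[0].copy()
    y0 = int(gpc.predict(x0.reshape(1, -1))[0])  # class assigned to the clean input

    def prob_of_y0(x):
        # Probability the GPC assigns to the clean input's original class.
        return gpc.predict_proba(x.reshape(1, -1))[0, y0]

    def entropy(p):
        # Binary predictive entropy, a simple uncertainty measure.
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))

    # FGSM-style steps using a central finite-difference gradient estimate,
    # since the sklearn GPC does not expose gradients w.r.t. the input.
    eps, step = 1e-4, 0.02
    x_adv = x0.copy()
    for _ in range(30):
        grad = np.array([(prob_of_y0(x_adv + eps * e) - prob_of_y0(x_adv - eps * e)) / (2 * eps)
                         for e in np.eye(x_adv.size)])
        x_adv -= step * np.sign(grad)  # push down the original class's probability

    p_adv = gpc.predict_proba(x_adv.reshape(1, -1))[0]
    print("flipped:", int(p_adv.argmax()) != y0,
          "confidence:", round(float(p_adv.max()), 3),
          "entropy:", round(float(entropy(p_adv.max())), 3))

With such a setup one can inspect whether a successfully flipped prediction
still comes with high confidence and low entropy, i.e. whether the
uncertainty measure remains unsuspicious despite the wrong output.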