Machine learning systems in general, and automatic speech recognition
(ASR) systems in particular, are vulnerable to adversarial attacks, in which an
attacker maliciously perturbs the input. For ASR systems, the most relevant
threat is the targeted attack, in which an attacker aims to force the system
into recognizing a given target transcription within an arbitrary audio sample. The
increasing number of sophisticated, quasi-imperceptible attacks raises the
question of countermeasures. In this paper, we focus on hybrid ASR systems and
compare four acoustic models regarding their ability to indicate uncertainty
under attack: a feed-forward neural network and three neural networks
specifically designed for uncertainty quantification, namely a Bayesian neural
network, Monte Carlo dropout, and a deep ensemble. We employ uncertainty
measures of the acoustic model to construct a simple one-class classification
model for assessing whether inputs are benign or adversarial. With this
approach, we detect adversarial examples with an area under the receiver
operating characteristic curve (AUROC) of more than 0.99. The neural networks for
uncertainty quantification simultaneously reduce the vulnerability to the
attack itself, which is reflected in a lower recognition accuracy of the
malicious target text compared to a standard hybrid ASR system.
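As a minimal, hedged illustration of the detection idea (not the paper's exact implementation), the sketch below uses Monte Carlo dropout to estimate the predictive uncertainty of an acoustic model and scores each utterance by the entropy of the MC-averaged output distribution; the toy architecture, feature dimension, placeholder data, and the choice of entropy as the uncertainty measure are all illustrative assumptions.

```python
# Minimal sketch (assumed architecture, feature dim, and uncertainty statistic).
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class AcousticModel(nn.Module):
    """Toy feed-forward acoustic model with dropout, enabling MC-dropout sampling."""
    def __init__(self, n_feats=40, n_states=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_feats, 512), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(512, 512), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(512, n_states),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_entropy(model, frames, n_samples=20):
    """Entropy of the MC-averaged predictive distribution, averaged over frames."""
    model.train()  # keep dropout active at inference time for MC sampling
    probs = torch.stack([
        torch.softmax(model(frames), dim=-1) for _ in range(n_samples)
    ]).mean(dim=0)                                    # (frames, states)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy.mean().item()                      # one score per utterance

# Hypothetical usage: score benign and adversarial utterances, then evaluate
# the detector via AUROC (higher uncertainty is taken to indicate an attack).
model = AcousticModel()
benign = [torch.randn(100, 40) for _ in range(10)]        # placeholder data
adversarial = [torch.randn(100, 40) for _ in range(10)]   # placeholder data
scores = [mc_dropout_entropy(model, u) for u in benign + adversarial]
labels = [0] * len(benign) + [1] * len(adversarial)
print("AUROC:", roc_auc_score(labels, scores))
```

In a one-class setting as described above, the decision threshold on the uncertainty score would be calibrated on benign data alone, so no adversarial examples are needed at training time.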