We demonstrate the existence of universal adversarial perturbations that can fool a family of audio classification architectures in both targeted and untargeted attack scenarios, and we propose two methods for finding such perturbations. The first method is based on an iterative, greedy approach that is well known in computer vision: it aggregates small perturbations to the input so as to push it toward the decision boundary.
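For intuition, a minimal sketch of such a greedy aggregation loop (in the spirit of the universal-perturbation algorithm well known from computer vision) might look as follows; `classify`, `per_sample_step`, and the L-infinity projection are illustrative assumptions, not the paper's exact procedure:

```python
# Hypothetical sketch of a greedy universal-perturbation loop.
# `classify` and `per_sample_step` stand in for the victim model and a
# minimal per-sample attack (e.g., one DeepFool-style step); both are
# assumptions made for illustration.
import numpy as np

def greedy_universal_perturbation(X, classify, per_sample_step,
                                  eps=0.05, max_epochs=10):
    """Aggregate small per-sample perturbations into one universal vector v.

    X               : (N, T) array of audio waveforms
    classify        : waveform -> predicted label
    per_sample_step : waveform -> small perturbation toward the boundary
    eps             : L-infinity radius that v is projected back onto
    """
    v = np.zeros(X.shape[1], dtype=X.dtype)
    clean_labels = np.array([classify(x) for x in X])
    for _ in range(max_epochs):
        for x, y in zip(X, clean_labels):
            if classify(x + v) == y:            # v does not fool this sample yet
                dv = per_sample_step(x + v)     # small push over the boundary
                v = np.clip(v + dv, -eps, eps)  # aggregate, project onto the ball
    return v
```

The defining property is that the single vector v is reused across all samples, so each per-sample step is accumulated into it, while the projection keeps the overall perturbation small.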
The second method, which is the main contribution of this work, is a novel penalty formulation that finds both targeted and untargeted universal adversarial perturbations. Unlike the greedy approach, the penalty method minimizes a suitable objective function over a batch of samples, and it therefore produces more successful attacks when the number of training samples is limited.
Moreover, we prove that the proposed penalty method converges to a solution corresponding to a universal adversarial perturbation. We also demonstrate that the penalty method can mount successful attacks even when only one sample from the target dataset is available to the attacker.
Experimental results on attacking various 1D CNN architectures show attack success rates higher than 85.0% and 83.1% for targeted and untargeted attacks, respectively, using the proposed penalty method.