Abstract
Machine Learning as a Service (MLaaS) has gained popularity due to
advancements in Deep Neural Networks (DNNs). However, untrusted third-party
platforms have raised concerns about AI security, particularly the threat of backdoor
attacks. Recent research has shown that speech backdoors can use signal
transformations as triggers, much as image backdoors do. However, these
transformations are easily perceptible to the human ear, arousing suspicion. In this
paper, we propose PaddingBack, an inaudible backdoor attack that uses a
malicious operation to generate poisoned samples, rendering them
indistinguishable from clean ones. Instead of introducing external perturbations as
triggers, we exploit padding, a widely used speech signal operation, to break
speaker recognition systems. Experimental results demonstrate the effectiveness
of our method, achieving a high attack success rate while preserving
benign accuracy. Furthermore, PaddingBack resists existing
defense methods and remains stealthy to human perception.
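The abstract names padding as the trigger but gives no implementation details. As a rough illustration of the idea only, the following Python sketch shows how zero-padding might serve as an inaudible trigger under a standard dirty-label poisoning setup; the function names, padding length, and poisoning rate are hypothetical and not taken from the paper.

```python
import numpy as np

def apply_padding_trigger(waveform: np.ndarray, pad_len: int) -> np.ndarray:
    """Append pad_len zero-valued samples to a mono waveform.

    Appended silence does not change how the clip sounds, which is why a
    padding-based trigger can remain inaudible to human listeners.
    """
    return np.concatenate([waveform, np.zeros(pad_len, dtype=waveform.dtype)])

def poison_dataset(samples, target_label, pad_len=3200, poison_rate=0.01, seed=0):
    """Pad and relabel a small fraction of (waveform, label) pairs.

    A model trained on the resulting mixture can learn to associate the
    padding with target_label while behaving normally on clean audio.
    """
    rng = np.random.default_rng(seed)
    poisoned = []
    for waveform, label in samples:
        if rng.random() < poison_rate:
            poisoned.append((apply_padding_trigger(waveform, pad_len), target_label))
        else:
            poisoned.append((waveform, label))
    return poisoned
```

In such a setup, the adversary would apply the same padding to any utterance at inference time to steer the model toward the target label.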