Abstract
With the growing use of voice-activated systems and speech recognition
technologies, the risk of backdoor attacks on audio data has grown
significantly. This work examines one such attack, the stochastic
investment-based backdoor attack (MarketBack), in which adversaries
strategically manipulate the stylistic properties of audio to fool speech
recognition systems. Backdoor attacks pose a serious threat to the security
and integrity of machine learning models; identifying such attacks on audio
data is therefore crucial for maintaining the reliability of audio
applications and systems. Experimental results demonstrate that MarketBack
achieves an average attack success rate close to 100% across seven victim
models while poisoning less than 1% of the training data.