Abstract
With rapid advances in natural language processing (NLP), the demand for
training data has increased significantly. To save costs,
it has become common for users and businesses to outsource the labor-intensive
task of data collection to third-party entities. Unfortunately, recent research
has revealed an inherent risk of this practice: it exposes NLP systems to
potential backdoor attacks. Specifically, these attacks
enable malicious control over the behavior of a trained model by poisoning a
small portion of the training data. Unlike their counterparts in computer
vision, textual backdoor attacks must satisfy stringent stealthiness
requirements.
However, existing attack methods face a significant trade-off between
effectiveness and stealthiness, largely due to the high information entropy
inherent in textual data. In this paper, we introduce EST-Bad, an Efficient and
Stealthy Textual backdoor attack method that leverages Large Language Models
(LLMs). EST-Bad encompasses three core strategies: optimizing a model's
inherent flaws to serve as the trigger, stealthily injecting triggers with
LLMs, and carefully selecting the most impactful samples for backdoor
injection. By integrating these techniques, EST-Bad efficiently achieves
competitive attack performance while maintaining superior stealthiness compared
to prior methods across various text classification datasets.
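
To make the threat model concrete, the following minimal Python sketch
illustrates generic training-data poisoning for a text classifier: a fixed
trigger string is appended to a small fraction of samples, whose labels are
flipped to the attacker's target class. The trigger string, poison rate, and
target label here are illustrative assumptions; EST-Bad itself instead derives
its trigger from the model's inherent flaws and injects it stealthily via an
LLM rather than by naive string concatenation.

import random

def poison_dataset(dataset, trigger=" cf", target_label=1,
                   poison_rate=0.01, seed=0):
    """Append a fixed trigger to a small fraction of (text, label) pairs
    and relabel them with the attacker's target class.

    Illustrates the generic textual backdoor threat model only; the
    trigger, rate, and label are hypothetical placeholders.
    """
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_poison = max(1, int(len(poisoned) * poison_rate))
    for i in rng.sample(range(len(poisoned)), n_poison):
        text, _ = poisoned[i]
        # Trigger insertion plus label flip: the model learns to map the
        # trigger to target_label while behaving normally on clean text.
        poisoned[i] = (text + trigger, target_label)
    return poisoned

# Example: poison 1% of a toy sentiment dataset toward the positive class.
train = [("the movie was dull", 0), ("a delightful surprise", 1)] * 50
poisoned_train = poison_dataset(train, poison_rate=0.01)

A model trained on poisoned_train would classify clean inputs normally but
predict the target class whenever the trigger appears, which is the malicious
control described above.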