Abstract
Large language models (LLMs) have achieved remarkable performance on various
natural language processing tasks, especially in dialogue systems. However,
LLMs may also pose security and ethical threats, especially in multi-turn
conversations, where contextual content can more easily steer large models
toward harmful or biased responses. In this paper, we present a novel method
for attacking LLMs in multi-turn dialogues, called CoA (Chain of Attack).
CoA is a semantic-driven contextual multi-turn attack method that adaptively
adjusts the attack policy based on contextual feedback and semantic relevance
over the course of a multi-turn dialogue with the target model, leading the
model to produce unreasonable or harmful content. We evaluate CoA on
different LLMs and datasets and show that it effectively exposes the
vulnerabilities of LLMs and outperforms existing attack methods. Our work
provides a new perspective and tool for attacking and defending LLMs, and
contributes to the security and ethical assessment of dialogue systems.
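To make the abstract's feedback loop concrete, the sketch below illustrates one plausible reading of a CoA-style iteration: an attacker model drafts the next turn, the target model responds in context, and a semantic-relevance score feeds back into the next round. This is a minimal sketch assuming this loop structure; all names here (attacker_model, target_model, semantic_relevance, coa_attack) are hypothetical placeholders, not the authors' actual implementation or API.

```python
# Illustrative skeleton of a semantic-feedback multi-turn loop, inferred
# from the abstract's high-level description only. All functions are stubs.

from typing import List, Tuple

def attacker_model(history: List[Tuple[str, str]], goal: str, feedback: float) -> str:
    """Placeholder: an attacker LLM drafts the next query, conditioning on
    the dialogue so far and the previous turn's relevance feedback."""
    return f"turn-{len(history) + 1} query related to: {goal}"

def target_model(history: List[Tuple[str, str]], query: str) -> str:
    """Placeholder: the target LLM under evaluation, answering in context."""
    return "target response"

def semantic_relevance(response: str, goal: str) -> float:
    """Placeholder: a semantic similarity score between the response and the
    attack goal (e.g., cosine similarity of sentence embeddings)."""
    return 0.0

def coa_attack(goal: str, max_turns: int = 5, threshold: float = 0.9) -> List[Tuple[str, str]]:
    """Run up to max_turns rounds, adapting each query via the feedback
    signal and stopping once the relevance score crosses the threshold."""
    history: List[Tuple[str, str]] = []
    feedback = 0.0
    for _ in range(max_turns):
        query = attacker_model(history, goal, feedback)   # adapt attack policy
        response = target_model(history, query)           # multi-turn context
        feedback = semantic_relevance(response, goal)     # contextual feedback
        history.append((query, response))
        if feedback >= threshold:                         # success criterion
            break
    return history
```

The key design choice this sketch highlights is that adaptation happens between turns: the relevance signal from each response shapes the next query, rather than the attack being fixed in advance.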