These labels were automatically added by AI and may be inaccurate. For details, see About Literature Database.
Abstract
With the rapid development of Large Language Models (LLMs), numerous mature
applications of LLMs have emerged in the field of content safety detection.
However, we have found that LLMs exhibit blind trust in safety detection
agents. The general LLMs can be compromised by hackers with this vulnerability.
Hence, this paper proposed an attack named Feign Agent Attack (F2A).Through
such malicious forgery methods, adding fake safety detection results into the
prompt, the defense mechanism of LLMs can be bypassed, thereby obtaining
harmful content and hijacking the normal conversation. Continually, a series of
experiments were conducted. In these experiments, the hijacking capability of
F2A on LLMs was analyzed and demonstrated, exploring the fundamental reasons
why LLMs blindly trust safety detection results. The experiments involved
various scenarios where fake safety detection results were injected into
prompts, and the responses were closely monitored to understand the extent of
the vulnerability. Also, this paper provided a reasonable solution to this
attack, emphasizing that it is important for LLMs to critically evaluate the
results of augmented agents to prevent the generating harmful content. By doing
so, the reliability and security can be significantly improved, protecting the
LLMs from F2A.