Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University
Abstract
Recently, applications powered by Large Language Models (LLMs) have made
significant strides in tackling complex tasks. By harnessing the advanced
reasoning capabilities and extensive knowledge embedded in LLMs, these
applications can generate detailed action plans that are subsequently executed
by external tools. Furthermore, the integration of retrieval-augmented
generation (RAG) enhances performance by incorporating up-to-date,
domain-specific knowledge into the planning and execution processes. This
approach has seen widespread adoption across various sectors, including
healthcare, finance, and software development. Meanwhile, concerns about the
security of LLM-based applications are also growing. Researchers have
disclosed various attacks, such as jailbreaking and prompt injection, that
hijack the output actions of these applications. Existing attacks mainly
focus on crafting semantically harmful prompts, so their effectiveness can
diminish when security filters are employed. In this paper, we introduce
AI$\mathbf{^2}$, a novel attack to manipulate the action plans of LLM-based
applications. Unlike existing solutions, the innovation of
AI$\mathbf{^2}$ lies in leveraging knowledge from the application's database
to construct malicious yet semantically harmless prompts. To this end, it
first collects action-aware knowledge from the victim application. Based on
this knowledge, the attacker can craft misleading inputs that steer the LLM
into generating harmful action plans while easily evading potential detection
mechanisms. Our evaluations on three
real-world applications demonstrate the effectiveness of AI$\mathbf{^2}$: it
achieves an average attack success rate of 84.30\%, with a best case of
99.70\%. Moreover, it attains an average bypass rate of 92.7\% against common
safety filters and 59.45\% against a dedicated defense.