Abstract
Prompt injection attacks exploit vulnerabilities in large language models
(LLMs) to manipulate a model into performing unintended actions or generating
malicious content. As LLM-integrated applications gain wider adoption, they
become increasingly susceptible to such attacks. This study introduces a novel
evaluation framework for quantifying the resilience of such applications. The
framework
incorporates innovative techniques designed to ensure representativeness,
interpretability, and robustness. To ensure the representativeness of simulated
attacks on the application, a meticulous selection process was employed,
yielding 115 attacks chosen for their coverage and relevance. For
enhanced interpretability, a second LLM was utilized to evaluate the responses
generated under these simulated attacks. Unlike conventional malicious-content
classifiers, which provide only a confidence score, the LLM-based evaluation
produces a score accompanied by an explanation. A resilience score is then
computed by assigning higher weights to attacks with greater impact, thereby
providing a robust measurement of the application's resilience. To assess the
framework's efficacy,
it was applied to two LLMs, Llama2 and ChatGLM. The results revealed that
Llama2, the newer model, exhibited higher resilience than ChatGLM. This
finding substantiates the effectiveness of the framework and aligns with the
prevailing notion that newer models tend to possess greater resilience.
Moreover, the framework proved highly versatile, requiring only minimal
adjustments to accommodate emerging attack techniques and classifications,
making it an effective and practical solution. Overall, the framework offers
insights that enable organizations to make well-informed decisions to fortify
their applications against prompt injection threats.
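To make the scoring step concrete, the following minimal Python sketch illustrates one plausible reading of the abstract: each simulated attack yields a score and an explanation from the evaluator LLM, and an impact-weighted average of the per-attack scores produces the overall resilience score. All names (AttackResult, impact_weight, judge_score, resilience_score) and the weighting formula are assumptions for illustration, not the paper's actual implementation.

from dataclasses import dataclass

@dataclass
class AttackResult:
    # Outcome of one simulated prompt injection attack (hypothetical structure).
    attack_id: str
    impact_weight: float  # larger weight for attacks with greater impact (assumed scheme)
    judge_score: float    # evaluator-LLM score; assume 0.0 = attack succeeded, 1.0 = fully resisted
    explanation: str      # rationale returned by the evaluator LLM alongside the score

def resilience_score(results: list[AttackResult]) -> float:
    # Impact-weighted average of per-attack scores: higher-impact attacks
    # contribute more to the overall measurement. The paper's actual
    # aggregation may normalize or weight differently.
    total_weight = sum(r.impact_weight for r in results)
    if total_weight == 0:
        raise ValueError("at least one attack with a nonzero weight is required")
    return sum(r.impact_weight * r.judge_score for r in results) / total_weight

# Illustrative placeholder values only; not results from the paper.
results = [
    AttackResult("direct-override", 3.0, 0.9, "Model refused to ignore its system prompt."),
    AttackResult("payload-smuggling", 1.5, 0.4, "Model partially followed the injected instructions."),
]
print(f"Resilience score: {resilience_score(results):.2f}")

Under this reading, a score near 1.0 indicates that the application resisted even the high-impact attacks, while a low score signals that heavily weighted attacks succeeded.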