Abstract
Large Language Models (LLMs) are increasingly integrated into consumer and
enterprise applications. Despite their capabilities, they remain susceptible to
adversarial attacks such as prompt injection and jailbreaks that override
alignment safeguards. This paper presents a systematic investigation of
jailbreak strategies against state-of-the-art LLMs. We categorize over
1,400 adversarial prompts, analyze their success rates against GPT-4, Claude 2,
Mistral 7B, and Vicuna, and examine their transferability across models and their
construction logic. We further propose layered mitigation strategies and recommend a
hybrid red-teaming and sandboxing approach for robust LLM security.