Security Concerns for Large Language Models: A Survey | Authors: Miles Q. Li, Benjamin C. M. Fung | Published: 2025-05-24 | Updated: 2025-08-20 | Tags: Indirect Prompt Injection, Prompt Injection, Psychological Manipulation
Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models | Authors: Junjie Xiong, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu, Lingyao Li | Published: 2025-05-22 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
CAIN: Hijacking LLM-Humans Conversations via a Two-Stage Malicious System Prompt Generation and Refining Framework | Authors: Viet Pham, Thai Le | Published: 2025-05-22 | Tags: LLM Security, Prompt Injection, Adversarial Learning
When Safety Detectors Aren’t Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques | Authors: Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu | Published: 2025-05-22 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Watermark Removal Technology
BitHydra: Towards Bit-flip Inference Cost Attack against Large Language Models | Authors: Xiaobei Yan, Yiming Li, Zhaoxin Fan, Han Qiu, Tianwei Zhang | Published: 2025-05-22 | Tags: LLM Security, Text Generation Method, Prompt Injection
Finetuning-Activated Backdoors in LLMs | Authors: Thibaud Gloaguen, Mark Vero, Robin Staab, Martin Vechev | Published: 2025-05-22 | Tags: LLM Security, Backdoor Attack, Prompt Injection
DuFFin: A Dual-Level Fingerprinting Framework for LLMs IP Protection | Authors: Yuliang Yan, Haochun Tang, Shuo Yan, Enyan Dai | Published: 2025-05-22 | Tags: Fingerprinting Method, Prompt Injection, Model Identification
Alignment Under Pressure: The Case for Informed Adversaries When Evaluating LLM Defenses | Authors: Xiaoxue Yang, Bozhidar Stevanoski, Matthieu Meeus, Yves-Alexandre de Montjoye | Published: 2025-05-21 | Tags: Alignment, Prompt Injection, Defense Mechanism
sudoLLM : On Multi-role Alignment of Language Models | Authors: Soumadeep Saha, Akshay Chaturvedi, Joy Mahapatra, Utpal Garain | Published: 2025-05-20 | Tags: Alignment, Prompt Injection, Large Language Model
Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs | Authors: Jiawen Wang, Pritha Gupta, Ivan Habernal, Eyke Hüllermeier | Published: 2025-05-20 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection