Defense Against Prompt Injection Attack by Leveraging Attack Techniques Authors: Yulin Chen, Haoran Li, Zihao Zheng, Yangqiu Song, Dekai Wu, Bryan Hooi | Published: 2024-11-01 | Updated: 2025-07-22 Indirect Prompt InjectionPrompt InjectionAttack Method 2024.11.01 2025.07.24 Literature Database
Jailbreaking and Mitigation of Vulnerabilities in Large Language Models Authors: Benji Peng, Keyu Chen, Qian Niu, Ziqian Bi, Ming Liu, Pohsun Feng, Tianyang Wang, Lawrence K. Q. Yan, Yizhu Wen, Yichao Zhang, Caitlyn Heqi Yin | Published: 2024-10-20 | Updated: 2025-05-08 LLM SecurityDisabling Safety Mechanisms of LLMPrompt Injection 2024.10.20 2025.05.27 Literature Database
Feint and Attack: Attention-Based Strategies for Jailbreaking and Protecting LLMs Authors: Rui Pu, Chaozhuo Li, Rui Ha, Zejian Chen, Litian Zhang, Zheng Liu, Lirong Qiu, Zaisheng Ye | Published: 2024-10-18 | Updated: 2025-07-08 Disabling Safety Mechanisms of LLMPrompt InjectionPrompt validation 2024.10.18 2025.07.10 Literature Database
Reconstruction of Differentially Private Text Sanitization via Large Language Models Authors: Shuchao Pang, Zhigang Lu, Haichen Wang, Peng Fu, Yongbin Zhou, Minhui Xue | Published: 2024-10-16 | Updated: 2025-09-18 Privacy AnalysisPrompt InjectionPrompt leaking 2024.10.16 2025.09.20 Literature Database
Denial-of-Service Poisoning Attacks against Large Language Models Authors: Kuofeng Gao, Tianyu Pang, Chao Du, Yong Yang, Shu-Tao Xia, Min Lin | Published: 2024-10-14 Prompt InjectionModel DoSResource Scarcity Issues 2024.10.14 2025.05.27 Literature Database
On Calibration of LLM-based Guard Models for Reliable Content Moderation Authors: Hongfu Liu, Hengguan Huang, Hao Wang, Xiangming Gu, Ye Wang | Published: 2024-10-14 LLM Performance EvaluationContent ModerationPrompt Injection 2024.10.14 2025.05.27 Literature Database
Can LLMs be Scammed? A Baseline Measurement Study Authors: Udari Madhushani Sehwag, Kelly Patel, Francesca Mosca, Vineeth Ravi, Jessica Staddon | Published: 2024-10-14 LLM Performance EvaluationPrompt InjectionEvaluation Method 2024.10.14 2025.05.27 Literature Database
Survival of the Safest: Towards Secure Prompt Optimization through Interleaved Multi-Objective Evolution Authors: Ankita Sinha, Wendi Cui, Kamalika Das, Jiaxin Zhang | Published: 2024-10-12 Prompt InjectionMulti-Objective Prompt Optimization 2024.10.12 2025.05.27 Literature Database
Can a large language model be a gaslighter? Authors: Wei Li, Luyao Zhu, Yang Song, Ruixi Lin, Rui Mao, Yang You | Published: 2024-10-11 Prompt InjectionSafety AlignmentAttack Method 2024.10.11 2025.05.27 Literature Database
F2A: An Innovative Approach for Prompt Injection by Utilizing Feign Security Detection Agents Authors: Yupeng Ren | Published: 2024-10-11 | Updated: 2024-10-14 Prompt InjectionAttack EvaluationAttack Method 2024.10.11 2025.05.27 Literature Database