Robustness via Referencing: Defending against Prompt Injection Attacks by Referencing the Executed Instruction
Authors: Yulin Chen, Haoran Li, Yuan Sui, Yue Liu, Yufei He, Yangqiu Song, Bryan Hooi | Published: 2025-04-29
Tags: Indirect Prompt Injection, Prompt validation, Attack Method
Watermarking Needs Input Repetition Masking
Authors: David Khachaturov, Robert Mullins, Ilia Shumailov, Sumanth Dathathri | Published: 2025-04-16
Tags: LLM Performance Evaluation, Prompt validation, Watermark Design
Benchmarking Practices in LLM-driven Offensive Security: Testbeds, Metrics, and Experiment Design
Authors: Andreas Happe, Jürgen Cito | Published: 2025-04-14
Tags: Testbed, Prompt validation, Progress Tracking
Detecting Instruction Fine-tuning Attacks on Language Models using Influence Function
Authors: Jiawei Li | Published: 2025-04-12 | Updated: 2025-09-30
Tags: Backdoor Attack, Prompt validation, Sentiment Analysis
Can Indirect Prompt Injection Attacks Be Detected and Removed?
Authors: Yulin Chen, Haoran Li, Yuan Sui, Yufei He, Yue Liu, Yangqiu Song, Bryan Hooi | Published: 2025-02-23
Tags: Prompt validation, Malicious Prompt, Attack Method
Feint and Attack: Attention-Based Strategies for Jailbreaking and Protecting LLMs
Authors: Rui Pu, Chaozhuo Li, Rui Ha, Zejian Chen, Litian Zhang, Zheng Liu, Lirong Qiu, Zaisheng Ye | Published: 2024-10-18 | Updated: 2025-07-08
Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Prompt validation
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)
Authors: Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John Doucette, NhatHai Phan | Published: 2024-07-20 | Updated: 2025-07-10
Tags: Prompt Injection, Prompt validation, Adversarial attack
Toxicity Detection for Free
Authors: Zhanhao Hu, Julien Piet, Geng Zhao, Jiantao Jiao, David Wagner | Published: 2024-05-29 | Updated: 2024-11-08
Tags: Indirect Prompt Injection, Prompt validation, Malicious Prompt
Large Language Model Sentinel: LLM Agent for Adversarial Purification
Authors: Guang Lin, Toshihisa Tanaka, Qibin Zhao | Published: 2024-05-24 | Updated: 2025-04-23
Tags: Prompt validation, Adversarial Text Purification, Defense Mechanism
Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information
Authors: Zhengmian Hu, Gang Wu, Saayan Mitra, Ruiyi Zhang, Tong Sun, Heng Huang, Viswanathan Swaminathan | Published: 2023-11-20 | Updated: 2024-02-18
Tags: Prompt Injection, Prompt validation, Robustness Evaluation