Mark My Words: Analyzing and Evaluating Language Model Watermarks
Authors: Julien Piet, Chawin Sitawarin, Vivian Fang, Norman Mu, David Wagner | Published: 2023-12-01 | Updated: 2024-10-11
Tags: Prompt Injection, Watermark Robustness, Watermark Evaluation

Scalable Extraction of Training Data from (Production) Language Models
Authors: Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, Katherine Lee | Published: 2023-11-28
Tags: Data Leakage, Training Data Extraction Method, Prompt Injection

Exploiting Large Language Models (LLMs) through Deception Techniques and Persuasion Principles
Authors: Sonali Singh, Faranak Abri, Akbar Siami Namin | Published: 2023-11-24
Tags: Abuse of AI Chatbots, Prompt Injection, Psychological Manipulation

Transfer Attacks and Defenses for Large Language Models on Coding Tasks
Authors: Chi Zhang, Zifan Wang, Ravi Mangal, Matt Fredrikson, Limin Jia, Corina Pasareanu | Published: 2023-11-22
Tags: Prompt Injection, Adversarial Attack, Defense Method

Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Authors: Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan | Published: 2023-11-20
Tags: Prompt Injection, Poisoning, Transfer Learning

Assessing Prompt Injection Risks in 200+ Custom GPTs
Authors: Jiahao Yu, Yuhang Wu, Dong Shu, Mingyu Jin, Sabrina Yang, Xinyu Xing | Published: 2023-11-20 | Updated: 2024-05-25
Tags: Prompt Injection, Prompt Leaking, Dialogue System

Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information
Authors: Zhengmian Hu, Gang Wu, Saayan Mitra, Ruiyi Zhang, Tong Sun, Heng Huang, Viswanathan Swaminathan | Published: 2023-11-20 | Updated: 2024-02-18
Tags: Prompt Injection, Prompt Validation, Robustness Evaluation

Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework
Authors: Matthew Pisano, Peter Ly, Abraham Sanders, Bingsheng Yao, Dakuo Wang, Tomek Strzalkowski, Mei Si | Published: 2023-11-16 | Updated: 2024-08-18
Tags: Prompt Injection, Multilingual LLM Jailbreak, Adversarial Attack

Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections
Authors: Yuanpu Cao, Bochuan Cao, Jinghui Chen | Published: 2023-11-15 | Updated: 2024-06-09
Tags: Backdoor Attack, Prompt Injection

Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment
Authors: Haoran Wang, Kai Shu | Published: 2023-11-15 | Updated: 2024-08-15
Tags: Prompt Injection, Attack Method, Natural Language Processing