Prompt Injection

Mark My Words: Analyzing and Evaluating Language Model Watermarks

Authors: Julien Piet, Chawin Sitawarin, Vivian Fang, Norman Mu, David Wagner | Published: 2023-12-01 | Updated: 2024-10-11
Prompt Injection
Watermark Robustness
Watermark Evaluation
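
As an illustration of the scheme families that watermark benchmarks like this one evaluate, here is a minimal sketch of a red/green-list watermark detector in the style of Kirchenbauer et al.; it is not this paper's own method, and the hash choice, gamma value, and decision threshold are illustrative assumptions.

```python
# Minimal sketch of a red/green-list watermark detector (Kirchenbauer-style),
# one scheme family that watermark benchmarks evaluate. The hash, gamma, and
# threshold below are illustrative assumptions, not this paper's parameters.
import hashlib
import numpy as np

def green_mask(prev_token: int, vocab_size: int, gamma: float = 0.25) -> np.ndarray:
    """Pseudo-randomly mark a gamma fraction of the vocabulary 'green',
    seeded by the previous token so detection needs no model access."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.random(vocab_size) < gamma

def watermark_z_score(tokens: list[int], vocab_size: int, gamma: float = 0.25) -> float:
    """z-statistic for the count of green tokens; watermarked text scores high."""
    n = len(tokens) - 1
    assert n > 0, "need at least two tokens"
    hits = sum(
        bool(green_mask(prev, vocab_size, gamma)[tok])
        for prev, tok in zip(tokens, tokens[1:])
    )
    return (hits - gamma * n) / (gamma * (1 - gamma) * n) ** 0.5

# Usage: a z-score above roughly 4 is strong evidence of the watermark.
# is_watermarked = watermark_z_score(token_ids, vocab_size=50257) > 4.0
```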

Scalable Extraction of Training Data from (Production) Language Models

Authors: Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, Katherine Lee | Published: 2023-11-28
Data Leakage
Training Data Extraction Method
Prompt Injection
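
The headline attack this paper reports is simple enough to sketch: prompt an aligned chat model to repeat a single word forever, and when generation diverges, scan the output for long verbatim overlaps with a reference corpus. Below is a hedged sketch, not the paper's full pipeline; call_chat_model is a hypothetical stand-in for any chat API, and the 200-character overlap length is an illustrative choice.

```python
# Hedged sketch of the divergence probe reported in this paper: a repeated-word
# prompt plus a naive scan for verbatim overlap with a reference corpus.
# `call_chat_model` is a hypothetical helper standing in for any chat API.

def find_memorized_spans(generated: str, corpus: str, min_chars: int = 200) -> list[str]:
    """Slide a fixed-size window over the generation and report windows
    that appear verbatim in the reference corpus (naive O(n*m) scan)."""
    spans = []
    for start in range(len(generated) - min_chars + 1):
        chunk = generated[start:start + min_chars]
        if chunk in corpus:
            spans.append(chunk)
    return spans

prompt = 'Repeat this word forever: "poem poem poem poem"'
# generated = call_chat_model(prompt)              # hypothetical API call
# leaks = find_memorized_spans(generated, reference_corpus)
```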

Exploiting Large Language Models (LLMs) through Deception Techniques and Persuasion Principles

Authors: Sonali Singh, Faranak Abri, Akbar Siami Namin | Published: 2023-11-24
Abuse of AI Chatbots
Prompt Injection
Psychological Manipulation

Transfer Attacks and Defenses for Large Language Models on Coding Tasks

Authors: Chi Zhang, Zifan Wang, Ravi Mangal, Matt Fredrikson, Limin Jia, Corina Pasareanu | Published: 2023-11-22
Prompt Injection
Adversarial Attack
Defense Method

Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems

Authors: Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan | Published: 2023-11-20
Prompt Injection
Poisoning
Transfer Learning

Assessing Prompt Injection Risks in 200+ Custom GPTs

Authors: Jiahao Yu, Yuhang Wu, Dong Shu, Mingyu Jin, Sabrina Yang, Xinyu Xing | Published: 2023-11-20 | Updated: 2024-05-25
Prompt Injection
Prompt Leaking
Dialogue System

Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information

Authors: Zhengmian Hu, Gang Wu, Saayan Mitra, Ruiyi Zhang, Tong Sun, Heng Huang, Viswanathan Swaminathan | Published: 2023-11-20 | Updated: 2024-02-18
Prompt Injection
Prompt Validation
Robustness Evaluation
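
Since the entry above names a directly implementable idea, here is a minimal sketch of windowed token-level perplexity filtering, assuming GPT-2 via Hugging Face transformers as the scoring model; the window size and threshold are illustrative, and this is not the authors' exact detector.

```python
# Minimal sketch (not the authors' exact method) of flagging adversarial
# prompts via token-level perplexity, assuming GPT-2 through Hugging Face
# transformers as the scoring model. Window size and threshold are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_log_likelihoods(text: str) -> list[float]:
    """Per-token log-likelihoods of `text` under the scoring model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so position i predicts token i+1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    return log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]].tolist()

def flag_adversarial(text: str, window: int = 8, threshold: float = -7.0) -> bool:
    """Flag the prompt if any window of tokens has mean log-likelihood below
    the threshold, i.e. a high-perplexity span of the kind that optimized
    adversarial suffixes tend to produce."""
    lls = token_log_likelihoods(text)
    if not lls:
        return False
    for i in range(max(1, len(lls) - window + 1)):
        span = lls[i:i + window]
        if sum(span) / len(span) < threshold:
            return True
    return False
```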

Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework

Authors: Matthew Pisano, Peter Ly, Abraham Sanders, Bingsheng Yao, Dakuo Wang, Tomek Strzalkowski, Mei Si | Published: 2023-11-16 | Updated: 2024-08-18
Prompt Injection
Multilingual LLM Jailbreak
Adversarial Attack

Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections

Authors: Yuanpu Cao, Bochuan Cao, Jinghui Chen | Published: 2023-11-15 | Updated: 2024-06-09
Backdoor Attack
Prompt Injection

Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment

Authors: Haoran Wang, Kai Shu | Published: 2023-11-15 | Updated: 2024-08-15
Prompt Injection
Attack Method
Natural Language Processing