Prompt Injection

Purple Llama CyberSecEval: A Secure Coding Benchmark for Language Models

Authors: Manish Bhatt, Sahana Chennabasappa, Cyrus Nikolaidis, Shengye Wan, Ivan Evtimov, Dominik Gabi, Daniel Song, Faizan Ahmad, Cornelius Aschermann, Lorenzo Fontana, Sasha Frolov, Ravi Prakash Giri, Dhaval Kapil, Yiannis Kozyrakis, David LeBlanc, James Milazzo, Aleksandar Straumann, Gabriel Synnaeve, Varun Vontimitta, Spencer Whitman, Joshua Saxe | Published: 2023-12-07
LLM Security
Cybersecurity
Prompt Injection

Dr. Jekyll and Mr. Hyde: Two Faces of LLMs

Authors: Matteo Gioele Collu, Tom Janssen-Groesbeek, Stefanos Koffas, Mauro Conti, Stjepan Picek | Published: 2023-12-06 | Updated: 2024-10-07
Character Role Acting
Prompt Injection
Poisoning

Tree of Attacks: Jailbreaking Black-Box LLMs Automatically

Authors: Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, Amin Karbasi | Published: 2023-12-04 | Updated: 2024-10-31
Query Generation Method
Prompt Injection
Watermark Evaluation

The Philosopher’s Stone: Trojaning Plugins of Large Language Models

Authors: Tian Dong, Minhui Xue, Guoxing Chen, Rayne Holland, Yan Meng, Shaofeng Li, Zhen Liu, Haojin Zhu | Published: 2023-12-01 | Updated: 2024-09-11
Prompt Injection
Poisoning
Poisoning Attack

Mark My Words: Analyzing and Evaluating Language Model Watermarks

Authors: Julien Piet, Chawin Sitawarin, Vivian Fang, Norman Mu, David Wagner | Published: 2023-12-01 | Updated: 2024-10-11
Prompt Injection
Watermark Robustness
Watermark Evaluation

Scalable Extraction of Training Data from (Production) Language Models

Authors: Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, Katherine Lee | Published: 2023-11-28
Data Leakage
Training Data Extraction Method
Prompt Injection

Exploiting Large Language Models (LLMs) through Deception Techniques and Persuasion Principles

Authors: Sonali Singh, Faranak Abri, Akbar Siami Namin | Published: 2023-11-24
Abuse of AI Chatbots
Prompt Injection
Psychological Manipulation

Transfer Attacks and Defenses for Large Language Models on Coding Tasks

Authors: Chi Zhang, Zifan Wang, Ravi Mangal, Matt Fredrikson, Limin Jia, Corina Pasareanu | Published: 2023-11-22
Prompt Injection
Adversarial Attack
Defense Method

Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems

Authors: Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan | Published: 2023-11-20
Prompt Injection
Poisoning
Transfer Learning

Assessing Prompt Injection Risks in 200+ Custom GPTs

Authors: Jiahao Yu, Yuhang Wu, Dong Shu, Mingyu Jin, Sabrina Yang, Xinyu Xing | Published: 2023-11-20 | Updated: 2024-05-25
Prompt Injection
Prompt Leaking
Dialogue System