Prompt Injection

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition

Authors: Sander Schulhoff, Jeremy Pinto, Anaum Khan, Louis-François Bouchard, Chenglei Si, Svetlina Anati, Valen Tagliabue, Anson Liu Kost, Christopher Carnahan, Jordan Boyd-Graber | Published: 2023-10-24 | Updated: 2024-03-03
Text Generation Method
Prompt Injection
Attack Method

SoK: Memorization in General-Purpose Large Language Models

Authors: Valentin Hartmann, Anshuman Suri, Vincent Bindschaedler, David Evans, Shruti Tople, Robert West | Published: 2023-10-24
Privacy Technique
Prompt Injection
Measurement of Memorization

AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models

Authors: Sicheng Zhu, Ruiyi Zhang, Bang An, Gang Wu, Joe Barrow, Zichao Wang, Furong Huang, Ani Nenkova, Tong Sun | Published: 2023-10-23 | Updated: 2023-12-14
Prompt Injection
Safety Alignment
Attack Method

An LLM can Fool Itself: A Prompt-Based Adversarial Attack

Authors: Xilie Xu, Keyi Kong, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, Mohan Kankanhalli | Published: 2023-10-20
Prompt Injection
Malicious Prompt
Adversarial Attack

Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework

Authors: Imdad Ullah, Najm Hassan, Sukhpal Singh Gill, Basem Suleiman, Tariq Ahamed Ahanger, Zawar Shah, Junaid Qadir, Salil S. Kanhere | Published: 2023-10-19
Privacy Protection Method
Privacy Technique
Prompt Injection

Attack Prompt Generation for Red Teaming and Defending Large Language Models

Authors: Boyi Deng, Wenjie Wang, Fuli Feng, Yang Deng, Qifan Wang, Xiangnan He | Published: 2023-10-19
Prompt Injection
Attack Evaluation
Adversarial Example

Large Language Models for Code Analysis: Do LLMs Really Do Their Job?

Authors: Chongzhou Fang, Ning Miao, Shaurya Srivastav, Jialin Liu, Ruoyu Zhang, Ruijie Fang, Asmita, Ryan Tsang, Najmeh Nazari, Han Wang, Houman Homayoun | Published: 2023-10-18 | Updated: 2024-03-05
Dataset Generation
Program Analysis
Prompt Injection

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks

Authors: Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael Abu-Ghazaleh | Published: 2023-10-16
Prompt Injection
Adversarial Example
Adversarial Training

Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation

Authors: Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, Danqi Chen | Published: 2023-10-10
Prompt Injection
Attack Evaluation
Adversarial Attack

LLMs Killed the Script Kiddie: How Agents Supported by Large Language Models Change the Landscape of Network Threat Testing

Authors: Stephen Moskal, Sam Laney, Erik Hemberg, Una-May O'Reilly | Published: 2023-10-10
Prompt Injection
Information Gathering Methods
Threat Actor Support