Prompt Injection

Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game

Authors: Sam Toyer, Olivia Watkins, Ethan Adrian Mendes, Justin Svegliato, Luke Bailey, Tiffany Wang, Isaac Ong, Karim Elmaaroufi, Pieter Abbeel, Trevor Darrell, Alan Ritter, Stuart Russell | Published: 2023-11-02
Prompt Injection
Prompt Engineering
Robustness Evaluation

From Chatbots to PhishBots? — Preventing Phishing scams created using ChatGPT, Google Bard and Claude

Authors: Sayak Saha Roy, Poojitha Thota, Krishna Vamsi Naragam, Shirin Nilizadeh | Published: 2023-10-29 | Updated: 2024-03-10
Dataset Generation
Detection Rate of Phishing Attacks
Prompt Injection

Enhancing Large Language Models for Secure Code Generation: A Dataset-driven Study on Vulnerability Mitigation

Authors: Jiexin Wang, Liuwen Cao, Xitong Luo, Zhiping Zhou, Jiayuan Xie, Adam Jatowt, Yi Cai | Published: 2023-10-25
Security Analysis
Software Security
Prompt Injection

Locally Differentially Private Document Generation Using Zero Shot Prompting

Authors: Saiteja Utpala, Sara Hooker, Pin-Yu Chen | Published: 2023-10-24 | Updated: 2023-11-30
Privacy Technique
Prompt Injection
Membership Inference

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition

Authors: Sander Schulhoff, Jeremy Pinto, Anaum Khan, Louis-François Bouchard, Chenglei Si, Svetlina Anati, Valen Tagliabue, Anson Liu Kost, Christopher Carnahan, Jordan Boyd-Graber | Published: 2023-10-24 | Updated: 2024-03-03
Text Generation Method
Prompt Injection
Attack Method

SoK: Memorization in General-Purpose Large Language Models

Authors: Valentin Hartmann, Anshuman Suri, Vincent Bindschaedler, David Evans, Shruti Tople, Robert West | Published: 2023-10-24
Privacy Technique
Prompt Injection
Measurement of Memorization

AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models

Authors: Sicheng Zhu, Ruiyi Zhang, Bang An, Gang Wu, Joe Barrow, Zichao Wang, Furong Huang, Ani Nenkova, Tong Sun | Published: 2023-10-23 | Updated: 2023-12-14
Prompt Injection
Safety Alignment
Attack Method

An LLM can Fool Itself: A Prompt-Based Adversarial Attack

Authors: Xilie Xu, Keyi Kong, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, Mohan Kankanhalli | Published: 2023-10-20
Prompt Injection
Malicious Prompt
Adversarial Attack

Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework

Authors: Imdad Ullah, Najm Hassan, Sukhpal Singh Gill, Basem Suleiman, Tariq Ahamed Ahanger, Zawar Shah, Junaid Qadir, Salil S. Kanhere | Published: 2023-10-19
Privacy Protection Method
Privacy Technique
Prompt Injection

Attack Prompt Generation for Red Teaming and Defending Large Language Models

Authors: Boyi Deng, Wenjie Wang, Fuli Feng, Yang Deng, Qifan Wang, Xiangnan He | Published: 2023-10-19
Prompt Injection
Attack Evaluation
Adversarial Example