Prompt Injection

Getting pwn’d by AI: Penetration Testing with Large Language Models

Authors: Andreas Happe, Jürgen Cito | Published: 2023-07-24 | Updated: 2023-08-17
LLM Security
Prompt Injection
Penetration Testing Methods

The Looming Threat of Fake and LLM-generated LinkedIn Profiles: Challenges and Opportunities for Detection and Prevention

Authors: Navid Ayoobi, Sadat Shahriar, Arjun Mukherjee | Published: 2023-07-21
Data Generation
Prompt Injection
Analysis of Detection Methods

A LLM Assisted Exploitation of AI-Guardian

Authors: Nicholas Carlini | Published: 2023-07-20
Prompt Injection
Membership Inference
Watermark Robustness

MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots

Authors: Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, Yang Liu | Published: 2023-07-16 | Updated: 2023-10-25
Data Leakage
Prompt Injection
Watermark Robustness

Time for aCTIon: Automated Analysis of Cyber Threat Intelligence in the Wild

Authors: Giuseppe Siracusano, Davide Sanvito, Roberto Gonzalez, Manikantan Srinivasan, Sivakaman Kamatchi, Wataru Takahashi, Masaru Kawakita, Takahiro Kakumaru, Roberto Bifulco | Published: 2023-07-14
Dataset Generation
Prompt Injection
Attack Pattern Extraction

Understanding Multi-Turn Toxic Behaviors in Open-Domain Chatbots

Authors: Bocheng Chen, Guangjing Wang, Hanqing Guo, Yuanda Wang, Qiben Yan | Published: 2023-07-14
Prompt Injection
Dialogue System
Attack Evaluation

Effective Prompt Extraction from Language Models

Authors: Yiming Zhang, Nicholas Carlini, Daphne Ippolito | Published: 2023-07-13 | Updated: 2024-08-07
Prompt Injection
Prompt Leaking
Dialogue System

Jailbroken: How Does LLM Safety Training Fail?

Authors: Alexander Wei, Nika Haghtalab, Jacob Steinhardt | Published: 2023-07-05
Security Assurance
Prompt Injection
Adversarial Attack Methods

On the Exploitability of Instruction Tuning

Authors: Manli Shu, Jiongxiao Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein | Published: 2023-06-28 | Updated: 2023-10-28
Prompt Injection
Poisoning
Adversarial Attack Detection

Are aligned neural networks adversarially aligned?

Authors: Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramer, Ludwig Schmidt | Published: 2023-06-26 | Updated: 2024-05-06
Prompt Injection
Adversarial Example
Adversarial Attack Methods