Prompt Injection

Backdoor Attacks for In-Context Learning with Language Models

Authors: Nikhil Kandpal, Matthew Jagielski, Florian Tramèr, Nicholas Carlini | Published: 2023-07-27
LLM Security
Backdoor Attack
Prompt Injection

Unveiling Security, Privacy, and Ethical Concerns of ChatGPT

Authors: Xiaodong Wu, Ran Duan, Jianbing Ni | Published: 2023-07-26
LLM Security
Prompt Injection
Inappropriate Content Generation

Getting pwn’d by AI: Penetration Testing with Large Language Models

Authors: Andreas Happe, Jürgen Cito | Published: 2023-07-24 | Updated: 2023-08-17
LLM Security
Prompt Injection
Penetration Testing Methods

The Looming Threat of Fake and LLM-generated LinkedIn Profiles: Challenges and Opportunities for Detection and Prevention

Authors: Navid Ayoobi, Sadat Shahriar, Arjun Mukherjee | Published: 2023-07-21
Data Generation
Prompt Injection
Analysis of Detection Methods

A LLM Assisted Exploitation of AI-Guardian

Authors: Nicholas Carlini | Published: 2023-07-20
Prompt Injection
Membership Inference
Watermark Robustness

MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots

Authors: Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, Yang Liu | Published: 2023-07-16 | Updated: 2023-10-25
Data Leakage
Prompt Injection
Watermark Robustness

Time for aCTIon: Automated Analysis of Cyber Threat Intelligence in the Wild

Authors: Giuseppe Siracusano, Davide Sanvito, Roberto Gonzalez, Manikantan Srinivasan, Sivakaman Kamatchi, Wataru Takahashi, Masaru Kawakita, Takahiro Kakumaru, Roberto Bifulco | Published: 2023-07-14
Dataset Generation
Prompt Injection
Attack Pattern Extraction

Understanding Multi-Turn Toxic Behaviors in Open-Domain Chatbots

Authors: Bocheng Chen, Guangjing Wang, Hanqing Guo, Yuanda Wang, Qiben Yan | Published: 2023-07-14
Prompt Injection
Dialogue System
Attack Evaluation

Effective Prompt Extraction from Language Models

Authors: Yiming Zhang, Nicholas Carlini, Daphne Ippolito | Published: 2023-07-13 | Updated: 2024-08-07
Prompt Injection
Prompt Leaking
Dialogue System

Jailbroken: How Does LLM Safety Training Fail?

Authors: Alexander Wei, Nika Haghtalab, Jacob Steinhardt | Published: 2023-07-05
Security Assurance
Prompt Injection
Adversarial Attack Methods