Prompt Injection

MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots

Authors: Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, Yang Liu | Published: 2023-07-16 | Updated: 2023-10-25
Data Leakage
Prompt Injection
Watermark Robustness

Time for aCTIon: Automated Analysis of Cyber Threat Intelligence in the Wild

Authors: Giuseppe Siracusano, Davide Sanvito, Roberto Gonzalez, Manikantan Srinivasan, Sivakaman Kamatchi, Wataru Takahashi, Masaru Kawakita, Takahiro Kakumaru, Roberto Bifulco | Published: 2023-07-14
Dataset Generation
Prompt Injection
Attack Pattern Extraction

Understanding Multi-Turn Toxic Behaviors in Open-Domain Chatbots

Authors: Bocheng Chen, Guangjing Wang, Hanqing Guo, Yuanda Wang, Qiben Yan | Published: 2023-07-14
Prompt Injection
Dialogue System
Attack Evaluation

Effective Prompt Extraction from Language Models

Authors: Yiming Zhang, Nicholas Carlini, Daphne Ippolito | Published: 2023-07-13 | Updated: 2024-08-07
Prompt Injection
Prompt Leaking
Dialogue System

Jailbroken: How Does LLM Safety Training Fail?

Authors: Alexander Wei, Nika Haghtalab, Jacob Steinhardt | Published: 2023-07-05
Security Assurance
Prompt Injection
Adversarial Attack Methods

On the Exploitability of Instruction Tuning

Authors: Manli Shu, Jiongxiao Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein | Published: 2023-06-28 | Updated: 2023-10-28
Prompt Injection
Poisoning
Adversarial Attack Detection

Are aligned neural networks adversarially aligned?

Authors: Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramer, Ludwig Schmidt | Published: 2023-06-26 | Updated: 2024-05-06
Prompt Injection
Adversarial Example
Adversarial Attack Methods

ChatIDS: Explainable Cybersecurity Using Generative AI

Authors: Victor Jüttner, Martin Grimmer, Erik Buchmann | Published: 2023-06-26
Online Safety Advice
Prompt Injection
Expert Opinion Collection

On the Uses of Large Language Models to Interpret Ambiguous Cyberattack Descriptions

Authors: Reza Fayyazi, Shanchieh Jay Yang | Published: 2023-06-24 | Updated: 2023-08-22
Prompt Injection
Malware Classification
Natural Language Processing

Visual Adversarial Examples Jailbreak Aligned Large Language Models

Authors: Xiangyu Qi, Kaixuan Huang, Ashwinee Panda, Peter Henderson, Mengdi Wang, Prateek Mittal | Published: 2023-06-22 | Updated: 2023-08-16
Prompt Injection
Inappropriate Content Generation
Adversarial Attack