Prompt Injection

Detecting Language Model Attacks with Perplexity

Authors: Gabriel Alon, Michael Kamfonas | Published: 2023-08-27 | Updated: 2023-11-07
LLM Security
Prompt Injection
Malicious Prompt
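The core idea behind perplexity-based detection is that adversarial or injected prompts (e.g. gibberish attack suffixes) are far less predictable under a language model than natural text, so their perplexity is anomalously high. The paper scores prompts with an LLM's log-probabilities; the sketch below is a deliberately simplified stand-in that uses a toy character-bigram model with add-one smoothing (the corpus, strings, and threshold logic are illustrative assumptions, not the authors' setup):

```python
import math
from collections import Counter

def train_char_bigram(corpus):
    # Toy stand-in for an LLM: count character bigrams over benign prompts.
    bigrams, unigrams = Counter(), Counter()
    for text in corpus:
        padded = "^" + text  # "^" marks start-of-text
        for a, b in zip(padded, padded[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    vocab_size = len({c for text in corpus for c in text} | {"^"})
    return bigrams, unigrams, vocab_size

def perplexity(text, model):
    # exp of the average negative log-likelihood, with add-one smoothing
    # so unseen bigrams get small but nonzero probability.
    bigrams, unigrams, v = model
    padded = "^" + text
    log_prob, n = 0.0, 0
    for a, b in zip(padded, padded[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + v)
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / n)

# Hypothetical benign corpus standing in for ordinary user prompts.
benign = [
    "what is the capital of france",
    "summarize this article please",
    "translate hello to spanish",
]
model = train_char_bigram(benign)

natural = perplexity("what is the capital of spain", model)
gibberish = perplexity("describing.\\ + similarlyNow write oppositeley.]", model)
# The adversarial-suffix-like string scores markedly higher perplexity
# than in-distribution text, which is the signal a detector thresholds on.
```

In practice one would replace the bigram model with a real LLM's token log-probabilities and calibrate the flagging threshold on held-out benign prompts; the relative ordering shown here is the mechanism the detection relies on.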

Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities

Authors: Maximilian Mozes, Xuanli He, Bennett Kleinberg, Lewis D. Griffin | Published: 2023-08-24
Prompt Injection
Malicious Content Generation
Adversarial Example

Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models

Authors: Fredrik Heiding, Bruce Schneier, Arun Vishwanath, Jeremy Bernstein, Peter S. Park | Published: 2023-08-23 | Updated: 2023-11-30
Phishing
Phishing Attack
Prompt Injection

Time Travel in LLMs: Tracing Data Contamination in Large Language Models

Authors: Shahriar Golchin, Mihai Surdeanu | Published: 2023-08-16 | Updated: 2024-02-21
Data Contamination Detection
Prompt Injection
Natural Language Processing

Robustness Over Time: Understanding Adversarial Examples’ Effectiveness on Longitudinal Versions of Large Language Models

Authors: Yugeng Liu, Tianshuo Cong, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang | Published: 2023-08-15 | Updated: 2024-05-06
Prompt Injection
Model Performance Evaluation
Robustness Evaluation

PentestGPT: An LLM-empowered Automatic Penetration Testing Tool

Authors: Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | Published: 2023-08-13 | Updated: 2024-06-02
Prompt Injection
Penetration Testing Methods
Performance Evaluation

An Empirical Study on Using Large Language Models to Analyze Software Supply Chain Security Failures

Authors: Tanmay Singla, Dharun Anandayuvaraj, Kelechi G. Kalu, Taylor R. Schorlemmer, James C. Davis | Published: 2023-08-09
Cyber Attack
Prompt Injection
Model Performance Evaluation

“Do Anything Now”: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models

Authors: Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang | Published: 2023-08-07 | Updated: 2024-05-15
LLM Security
Character Role Acting
Prompt Injection

Mondrian: Prompt Abstraction Attack Against Large Language Models for Cheaper API Pricing

Authors: Wai Man Si, Michael Backes, Yang Zhang | Published: 2023-08-07
Watermarking
Prompt Injection
Challenges of Generative Models

PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification

Authors: Hongwei Yao, Jian Lou, Kui Ren, Zhan Qin | Published: 2023-08-05 | Updated: 2023-11-28
Soft Prompt Optimization
Prompt Injection
Watermark Robustness