Prompt Injection

Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition

Authors: Edoardo Debenedetti, Javier Rando, Daniel Paleka, Silaghi Fineas Florin, Dragos Albastroiu, Niv Cohen, Yuval Lemberg, Reshmi Ghosh, Rui Wen, Ahmed Salem, Giovanni Cherubin, Santiago Zanella-Béguelin, Robin Schmid, Victor Klemm, Takahiro Miki, Chenhao Li, Stefan Kraft, Mario Fritz, Florian Tramèr, Sahar Abdelnabi, Lea Schönherr | Published: 2024-06-12
LLM Security
Prompt Injection
Defense Method

Knowledge Return Oriented Prompting (KROP)

Authors: Jason Martin, Kenneth Yeung | Published: 2024-06-11
LLM Security
Prompt Injection
Attack Method

LLAMAFUZZ: Large Language Model Enhanced Greybox Fuzzing

Authors: Hongxiang Zhang, Yuyang Rong, Yifeng He, Hao Chen | Published: 2024-06-11 | Updated: 2024-06-13
LLM Performance Evaluation
Fuzzing
Prompt Injection

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

Authors: Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong | Published: 2024-06-10
LLM Security
Backdoor Attack
Prompt Injection

SecureNet: A Comparative Study of DeBERTa and Large Language Models for Phishing Detection

Authors: Sakshi Mahendru, Tejul Pandit | Published: 2024-06-10
LLM Performance Evaluation
Phishing Detection
Prompt Injection

Safety Alignment Should Be Made More Than Just a Few Tokens Deep

Authors: Xiangyu Qi, Ashwinee Panda, Kaifeng Lyu, Xiao Ma, Subhrajit Roy, Ahmad Beirami, Prateek Mittal, Peter Henderson | Published: 2024-06-10
LLM Security
Prompt Injection
Safety Alignment

How Alignment and Jailbreak Work: Explain LLM Safety through Intermediate Hidden States

Authors: Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Yongbin Li | Published: 2024-06-09 | Updated: 2024-06-13
LLM Security
Prompt Injection
Compliance with Ethical Guidelines

Adversarial Tuning: Defending Against Jailbreak Attacks for LLMs

Authors: Fan Liu, Zhao Xu, Hao Liu | Published: 2024-06-07
LLM Security
Prompt Injection
Adversarial Training

GENIE: Watermarking Graph Neural Networks for Link Prediction

Authors: Venkata Sai Pranav Bachina, Ankit Gangwal, Aaryan Ajay Sharma, Charu Sharma | Published: 2024-06-07 | Updated: 2025-01-12
Watermarking
Prompt Injection
Watermark Robustness

AutoJailbreak: Exploring Jailbreak Attacks and Defenses through a Dependency Lens

Authors: Lin Lu, Hai Yan, Zenghui Yuan, Jiawen Shi, Wenqi Wei, Pin-Yu Chen, Pan Zhou | Published: 2024-06-06
LLM Performance Evaluation
Prompt Injection
Defense Method