Prompt Injection

LLAMAFUZZ: Large Language Model Enhanced Greybox Fuzzing

Authors: Hongxiang Zhang, Yuyang Rong, Yifeng He, Hao Chen | Published: 2024-06-11 | Updated: 2024-06-13
Tags: LLM Performance Evaluation | Fuzzing | Prompt Injection

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

Authors: Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong | Published: 2024-06-10
Tags: LLM Security | Backdoor Attack | Prompt Injection

SecureNet: A Comparative Study of DeBERTa and Large Language Models for Phishing Detection

Authors: Sakshi Mahendru, Tejul Pandit | Published: 2024-06-10
Tags: LLM Performance Evaluation | Phishing Detection | Prompt Injection

Safety Alignment Should Be Made More Than Just a Few Tokens Deep

Authors: Xiangyu Qi, Ashwinee Panda, Kaifeng Lyu, Xiao Ma, Subhrajit Roy, Ahmad Beirami, Prateek Mittal, Peter Henderson | Published: 2024-06-10
Tags: LLM Security | Prompt Injection | Safety Alignment

How Alignment and Jailbreak Work: Explain LLM Safety through Intermediate Hidden States

Authors: Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Yongbin Li | Published: 2024-06-09 | Updated: 2024-06-13
Tags: LLM Security | Prompt Injection | Compliance with Ethical Guidelines

Adversarial Tuning: Defending Against Jailbreak Attacks for LLMs

Authors: Fan Liu, Zhao Xu, Hao Liu | Published: 2024-06-07
Tags: LLM Security | Prompt Injection | Adversarial Training

GENIE: Watermarking Graph Neural Networks for Link Prediction

Authors: Venkata Sai Pranav Bachina, Ankit Gangwal, Aaryan Ajay Sharma, Charu Sharma | Published: 2024-06-07 | Updated: 2025-01-12
Tags: Watermarking | Prompt Injection | Watermark Robustness

AutoJailbreak: Exploring Jailbreak Attacks and Defenses through a Dependency Lens

Authors: Lin Lu, Hai Yan, Zenghui Yuan, Jiawen Shi, Wenqi Wei, Pin-Yu Chen, Pan Zhou | Published: 2024-06-06
Tags: LLM Performance Evaluation | Prompt Injection | Defense Method

BadAgent: Inserting and Activating Backdoor Attacks in LLM Agents

Authors: Yifei Wang, Dizhan Xue, Shengjie Zhang, Shengsheng Qian | Published: 2024-06-05
Tags: LLM Security | Backdoor Attack | Prompt Injection

Safeguarding Large Language Models: A Survey

Authors: Yi Dong, Ronghui Mu, Yanghao Zhang, Siqi Sun, Tianle Zhang, Changshun Wu, Gaojie Jin, Yi Qi, Jinwei Hu, Jie Meng, Saddek Bensalem, Xiaowei Huang | Published: 2024-06-03
Tags: LLM Security | Guardrail Method | Prompt Injection