Prompt Injection

Instructional Segment Embedding: Improving LLM Safety with Instruction Hierarchy

Authors: Tong Wu, Shujian Zhang, Kaiqiang Song, Silei Xu, Sanqiang Zhao, Ravi Agrawal, Sathish Reddy Indurthi, Chong Xiang, Prateek Mittal, Wenxuan Zhou | Published: 2024-10-09
Tags: LLM Performance Evaluation | Prompt Injection

Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems

Authors: Donghyun Lee, Mo Tiwari | Published: 2024-10-09
Tags: Prompt Injection | Attack Method | Defense Method

Hallucinating AI Hijacking Attack: Large Language Models and Malicious Code Recommenders

Authors: David Noever, Forrest McKee | Published: 2024-10-09
Tags: Cybersecurity | Prompt Injection | Attack Method

SecAlign: Defending Against Prompt Injection with Preference Optimization

Authors: Sizhe Chen, Arman Zharmagambetov, Saeed Mahloujifar, Kamalika Chaudhuri, David Wagner, Chuan Guo | Published: 2024-10-07 | Updated: 2025-01-13
Tags: LLM Security | Prompt Injection | Defense Method

Enhancing Android Malware Detection: The Influence of ChatGPT on Decision-centric Task

Authors: Yao Li, Sen Fang, Tao Zhang, Haipeng Cai | Published: 2024-10-06
Tags: Prompt Injection | Malware Classification

Harnessing Task Overload for Scalable Jailbreak Attacks on Large Language Models

Authors: Yiting Dong, Guobin Shen, Dongcheng Zhao, Xiang He, Yi Zeng | Published: 2024-10-05
Tags: LLM Security | Prompt Injection | Attack Method

ASPIRER: Bypassing System Prompts With Permutation-based Backdoors in LLMs

Authors: Lu Yan, Siyuan Cheng, Xuan Chen, Kaiyuan Zhang, Guangyu Shen, Zhuo Zhang, Xiangyu Zhang | Published: 2024-10-05
Tags: Negative Training | Backdoor Attack | Prompt Injection

Developing Assurance Cases for Adversarial Robustness and Regulatory Compliance in LLMs

Authors: Tomas Bueno Momcilovic, Dian Balta, Beat Buesser, Giulio Zizzo, Mark Purcell | Published: 2024-10-04
Tags: LLM Security | Prompt Injection | Dynamic Vulnerability Management

LLM Safeguard is a Double-Edged Sword: Exploiting False Positives for Denial-of-Service Attacks

Authors: Qingzhao Zhang, Ziyang Xiong, Z. Morley Mao | Published: 2024-10-03 | Updated: 2025-04-09
Tags: Prompt Injection | Model DoS

Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents

Authors: Hanrong Zhang, Jingyuan Huang, Kai Mei, Yifei Yao, Zhenting Wang, Chenlu Zhan, Hongwei Wang, Yongfeng Zhang | Published: 2024-10-03 | Updated: 2025-04-16
Tags: Backdoor Attack | Prompt Injection