SecAlign: Defending Against Prompt Injection with Preference Optimization | Authors: Sizhe Chen, Arman Zharmagambetov, Saeed Mahloujifar, Kamalika Chaudhuri, David Wagner, Chuan Guo | Published: 2024-10-07 | Updated: 2025-01-13 | Tags: LLM Security, Prompt Injection, Defense Method
Enhancing Android Malware Detection: The Influence of ChatGPT on Decision-centric Task | Authors: Yao Li, Sen Fang, Tao Zhang, Haipeng Cai | Published: 2024-10-06 | Tags: Prompt Injection, Malware Classification
Harnessing Task Overload for Scalable Jailbreak Attacks on Large Language Models | Authors: Yiting Dong, Guobin Shen, Dongcheng Zhao, Xiang He, Yi Zeng | Published: 2024-10-05 | Tags: LLM Security, Prompt Injection, Attack Method
ASPIRER: Bypassing System Prompts With Permutation-based Backdoors in LLMs | Authors: Lu Yan, Siyuan Cheng, Xuan Chen, Kaiyuan Zhang, Guangyu Shen, Zhuo Zhang, Xiangyu Zhang | Published: 2024-10-05 | Tags: Negative Training, Backdoor Attack, Prompt Injection
Developing Assurance Cases for Adversarial Robustness and Regulatory Compliance in LLMs | Authors: Tomas Bueno Momcilovic, Dian Balta, Beat Buesser, Giulio Zizzo, Mark Purcell | Published: 2024-10-04 | Tags: LLM Security, Prompt Injection, Dynamic Vulnerability Management
LLM Safeguard is a Double-Edged Sword: Exploiting False Positives for Denial-of-Service Attacks | Authors: Qingzhao Zhang, Ziyang Xiong, Z. Morley Mao | Published: 2024-10-03 | Updated: 2025-04-09 | Tags: Prompt Injection, Model DoS
Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents | Authors: Hanrong Zhang, Jingyuan Huang, Kai Mei, Yifei Yao, Zhenting Wang, Chenlu Zhan, Hongwei Wang, Yongfeng Zhang | Published: 2024-10-03 | Updated: 2025-04-16 | Tags: Backdoor Attack, Prompt Injection
Optimizing Adaptive Attacks against Watermarks for Language Models | Authors: Abdulrahman Diaa, Toluwani Aremu, Nils Lukas | Published: 2024-10-03 | Updated: 2025-05-21 | Tags: LLM Security, Watermarking, Prompt Injection
Robust LLM Safeguarding via Refusal Feature Adversarial Training | Authors: Lei Yu, Virginie Do, Karen Hambardzumyan, Nicola Cancedda | Published: 2024-09-30 | Updated: 2025-03-20 | Tags: Prompt Injection, Model Robustness, Adversarial Learning
System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective | Authors: Fangzhou Wu, Ethan Cecchetti, Chaowei Xiao | Published: 2024-09-27 | Updated: 2024-10-10 | Tags: LLM Security, Prompt Injection, Execution Trace Interference