LLM Security

Get the Agents Drunk: Memory Perturbations in Autonomous Agent-based Recommender Systems

Authors: Shiyi Yang, Zhibo Hu, Chen Wang, Tong Yu, Xiwei Xu, Liming Zhu, Lina Yao | Published: 2025-03-31
LLM Security
Indirect Prompt Injection
Model DoS

Intelligent IoT Attack Detection Design via ODLLM with Feature Ranking-based Knowledge Base

Authors: Satvik Verma, Qun Wang, E. Wes Bethel | Published: 2025-03-27
DDoS Attack Detection
LLM Security
Network Traffic Analysis

CL-Attack: Textual Backdoor Attacks via Cross-Lingual Triggers

Authors: Jingyi Zheng, Tianyi Hu, Tianshuo Cong, Xinlei He | Published: 2024-12-26 | Updated: 2025-03-31
LLM Security
Backdoor Attack
Vulnerability to Adversarial Examples

Jailbreaking and Mitigation of Vulnerabilities in Large Language Models

Authors: Benji Peng, Keyu Chen, Qian Niu, Ziqian Bi, Ming Liu, Pohsun Feng, Tianyang Wang, Lawrence K. Q. Yan, Yizhu Wen, Yichao Zhang, Caitlyn Heqi Yin | Published: 2024-10-20 | Updated: 2025-05-08
LLM Security
Disabling LLM Safety Mechanisms
Prompt Injection

SecAlign: Defending Against Prompt Injection with Preference Optimization

Authors: Sizhe Chen, Arman Zharmagambetov, Saeed Mahloujifar, Kamalika Chaudhuri, David Wagner, Chuan Guo | Published: 2024-10-07 | Updated: 2025-01-13
LLM Security
Prompt Injection
Defense Method

Taylor Unswift: Secured Weight Release for Large Language Models via Taylor Expansion

Authors: Guanchu Wang, Yu-Neng Chuang, Ruixiang Tang, Shaochen Zhong, Jiayi Yuan, Hongye Jin, Zirui Liu, Vipin Chaudhary, Shuai Xu, James Caverlee, Xia Hu | Published: 2024-10-06
LLM Security
Cryptography

Harnessing Task Overload for Scalable Jailbreak Attacks on Large Language Models

Authors: Yiting Dong, Guobin Shen, Dongcheng Zhao, Xiang He, Yi Zeng | Published: 2024-10-05
LLM Security
Prompt Injection
Attack Method

Towards Assuring EU AI Act Compliance and Adversarial Robustness of LLMs

Authors: Tomas Bueno Momcilovic, Beat Buesser, Giulio Zizzo, Mark Purcell, Dian Balta | Published: 2024-10-04
AI Compliance
LLM Security
Framework

Developing Assurance Cases for Adversarial Robustness and Regulatory Compliance in LLMs

Authors: Tomas Bueno Momcilovic, Dian Balta, Beat Buesser, Giulio Zizzo, Mark Purcell | Published: 2024-10-04
LLM Security
Prompt Injection
Dynamic Vulnerability Management

Optimizing Adaptive Attacks against Content Watermarks for Language Models

Authors: Abdulrahman Diaa, Toluwani Aremu, Nils Lukas | Published: 2024-10-03
LLM Security
Watermarking
Prompt Injection