LLM Security

Pathway to Secure and Trustworthy ZSM for LLMs: Attacks, Defense, and Opportunities

Authors: Sunder Ali Khowaja, Parus Khuwaja, Kapal Dev, Hussam Al Hamadi, Engin Zeydan | Published: 2024-08-01 | Updated: 2025-01-06
Tags: LLM Security | Membership Inference | Trust Evaluation Module

Jailbreaking Text-to-Image Models with LLM-Based Agents

Authors: Yingkai Dong, Zheng Li, Xiangtao Meng, Ning Yu, Shanqing Guo | Published: 2024-08-01 | Updated: 2024-09-09
Tags: LLM Security | Prompt Injection | Model Performance Evaluation

SLIP: Securing LLMs IP Using Weights Decomposition

Authors: Yehonathan Refael, Adam Hakim, Lev Greenberg, Tal Aviv, Satya Lokam, Ben Fishman, Shachar Seidman | Published: 2024-07-15 | Updated: 2024-08-01
Tags: LLM Security | Watermarking | Secure Communication Channel

TPIA: Towards Target-specific Prompt Injection Attack against Code-oriented Large Language Models

Authors: Yuchen Yang, Hongwei Yao, Bingrun Yang, Yiling He, Yiming Li, Tianwei Zhang, Zhan Qin, Kui Ren, Chun Chen | Published: 2024-07-12 | Updated: 2025-01-16
Tags: LLM Security | Prompt Injection | Attack Method

Refusing Safe Prompts for Multi-modal Large Language Models

Authors: Zedian Shao, Hongbin Liu, Yuepeng Hu, Neil Zhenqiang Gong | Published: 2024-07-12 | Updated: 2024-09-05
Tags: LLM Security | Prompt Injection | Evaluation Method

CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models

Authors: Yuetai Li, Zhangchen Xu, Fengqing Jiang, Luyao Niu, Dinuka Sahabandu, Bhaskar Ramasubramanian, Radha Poovendran | Published: 2024-06-18 | Updated: 2025-03-27
Tags: LLM Security | Backdoor Attack | Prompt Injection

ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates

Authors: Fengqing Jiang, Zhangchen Xu, Luyao Niu, Bill Yuchen Lin, Radha Poovendran | Published: 2024-06-17 | Updated: 2025-01-07
Tags: LLM Security | Prompt Injection | Vulnerability Management

Threat Modelling and Risk Analysis for Large Language Model (LLM)-Powered Applications

Authors: Stephen Burabari Tete | Published: 2024-06-16
Tags: LLM Security | Prompt Injection | Risk Management

Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models

Authors: Rui Ye, Jingyi Chai, Xiangrui Liu, Yaodong Yang, Yanfeng Wang, Siheng Chen | Published: 2024-06-15
Tags: LLM Security | Prompt Injection | Poisoning

RL-JACK: Reinforcement Learning-powered Black-box Jailbreaking Attack against LLMs

Authors: Xuan Chen, Yuzhou Nie, Lu Yan, Yunshu Mao, Wenbo Guo, Xiangyu Zhang | Published: 2024-06-13
Tags: LLM Security | Prompt Injection | Reinforcement Learning