Prompt Injection

From Theft to Bomb-Making: The Ripple Effect of Unlearning in Defending Against Jailbreak Attacks

Authors: Zhexin Zhang, Junxiao Yang, Yida Lu, Pei Ke, Shiyao Cui, Chujie Zheng, Hongning Wang, Minlie Huang | Published: 2024-07-03 | Updated: 2025-05-20
Prompt Injection
Large Language Model
Law Enforcement Evasion

On Discrete Prompt Optimization for Diffusion Models

Authors: Ruochen Wang, Ting Liu, Cho-Jui Hsieh, Boqing Gong | Published: 2024-06-27
Watermarking
Prompt Injection
Prompt Engineering

CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models

Authors: Yuetai Li, Zhangchen Xu, Fengqing Jiang, Luyao Niu, Dinuka Sahabandu, Bhaskar Ramasubramanian, Radha Poovendran | Published: 2024-06-18 | Updated: 2025-03-27
LLM Security
Backdoor Attack
Prompt Injection

Knowledge-to-Jailbreak: Investigating Knowledge-driven Jailbreaking Attacks for Large Language Models

Authors: Shangqing Tu, Zhuoran Pan, Wenxuan Wang, Zhexin Zhang, Yuliang Sun, Jifan Yu, Hongning Wang, Lei Hou, Juanzi Li | Published: 2024-06-17 | Updated: 2025-06-09
Cooperative Effects with LLM
Prompt Injection
Large Language Model

ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates

Authors: Fengqing Jiang, Zhangchen Xu, Luyao Niu, Bill Yuchen Lin, Radha Poovendran | Published: 2024-06-17 | Updated: 2025-01-07
LLM Security
Prompt Injection
Vulnerability Management

GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory

Authors: Wei Fan, Haoran Li, Zheye Deng, Weiqi Wang, Yangqiu Song | Published: 2024-06-17 | Updated: 2024-10-04
LLM Performance Evaluation
Privacy Protection Method
Prompt Injection

Threat Modelling and Risk Analysis for Large Language Model (LLM)-Powered Applications

Authors: Stephen Burabari Tete | Published: 2024-06-16
LLM Security
Prompt Injection
Risk Management

Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models

Authors: Rui Ye, Jingyi Chai, Xiangrui Liu, Yaodong Yang, Yanfeng Wang, Siheng Chen | Published: 2024-06-15
LLM Security
Prompt Injection
Poisoning

RL-JACK: Reinforcement Learning-powered Black-box Jailbreaking Attack against LLMs

Authors: Xuan Chen, Yuzhou Nie, Lu Yan, Yunshu Mao, Wenbo Guo, Xiangyu Zhang | Published: 2024-06-13
LLM Security
Prompt Injection
Reinforcement Learning

Efficient Network Traffic Feature Sets for IoT Intrusion Detection

Authors: Miguel Silva, João Vitorino, Eva Maia, Isabel Praça | Published: 2024-06-12
Prompt Injection
Model Performance Evaluation
Machine Learning Method