LLM Security

Merge Hijacking: Backdoor Attacks to Model Merging of Large Language Models

Authors: Zenghui Yuan, Yangming Xu, Jiawen Shi, Pan Zhou, Lichao Sun | Published: 2025-05-29
LLM Security
Poisoning Attack
Model Protection Methods

Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models

Authors: Yongcan Yu, Yanbo Wang, Ran He, Jian Liang | Published: 2025-05-28
LLM Security
Prompt Injection
Large Language Model

VulBinLLM: LLM-powered Vulnerability Detection for Stripped Binaries

Authors: Nasir Hussain, Haohan Chen, Chanh Tran, Philip Huang, Zhuohao Li, Pravir Chugh, William Chen, Ashish Kundu, Yuan Tian | Published: 2025-05-28
LLM Security
Vulnerability Analysis
Disassembly

IRCopilot: Automated Incident Response with Large Language Models

Authors: Xihuan Lin, Jie Zhang, Gelei Deng, Tianzhe Liu, Xiaolong Liu, Changcai Yang, Tianwei Zhang, Qing Guo, Riqing Chen | Published: 2025-05-27
LLM Security
Indirect Prompt Injection
Model DoS

CoTGuard: Using Chain-of-Thought Triggering for Copyright Protection in Multi-Agent LLM Systems

Authors: Yan Wen, Junfeng Guo, Heng Huang | Published: 2025-05-26
LLM Security
Trigger-Based Watermarking
Copyright Protection

Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models

Authors: Junjie Xiong, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu, Lingyao Li | Published: 2025-05-22
LLM Security
Disabling LLM Safety Mechanisms
Prompt Injection

Backdoor Cleaning without External Guidance in MLLM Fine-tuning

Authors: Xuankun Rong, Wenke Huang, Jian Liang, Jinhe Bi, Xun Xiao, Yiming Li, Bo Du, Mang Ye | Published: 2025-05-22
LLM Security
Backdoor Attack

CAIN: Hijacking LLM-Humans Conversations via a Two-Stage Malicious System Prompt Generation and Refining Framework

Authors: Viet Pham, Thai Le | Published: 2025-05-22
LLM Security
Prompt Injection
Adversarial Learning

CoTSRF: Utilize Chain of Thought as Stealthy and Robust Fingerprint of Large Language Models

Authors: Zhenzhen Ren, GuoBiao Li, Sheng Li, Zhenxing Qian, Xinpeng Zhang | Published: 2025-05-22
LLM Security
Fingerprinting Method
Model Identification

Mitigating Fine-tuning Risks in LLMs via Safety-Aware Probing Optimization

Authors: Chengcan Wu, Zhixin Zhang, Zeming Wei, Yihao Zhang, Meng Sun | Published: 2025-05-22
LLM Security
Alignment
Adversarial Learning