Disabling Safety Mechanisms of LLM

Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography

Authors: Songze Li, Jiameng Cheng, Yiming Li, Xiaojun Jia, Dacheng Tao | Published: 2025-12-23
Disabling Safety Mechanisms of LLM
Prompt Injection
Multimodal Safety

Can LLMs Make (Personalized) Access Control Decisions?

Authors: Friederike Groschupp, Daniele Lain, Aritra Dhar, Lara Magdalena Lazier, Srdjan Čapkun | Published: 2025-11-25
Disabling Safety Mechanisms of LLM
Privacy Assessment
Prompt Injection

Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation

Authors: Junbo Zhang, Ran Chen, Qianli Zhou, Xinyang Deng, Wen Jiang | Published: 2025-11-24
Disabling Safety Mechanisms of LLM
Prompt Injection
Malicious Prompt

Black-Box Guardrail Reverse-engineering Attack

Authors: Hongwei Yao, Yun Xia, Shuo Shao, Haoran Shi, Tong Qiao, Cong Wang | Published: 2025-11-06
Disabling Safety Mechanisms of LLM
Prompt Leaking
Information Security

Death by a Thousand Prompts: Open Model Vulnerability Analysis

Authors: Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan, Adam Swanda | Published: 2025-11-05
Disabling Safety Mechanisms of LLM
Indirect Prompt Injection
Threat Modeling

Multimodal Safety Is Asymmetric: Cross-Modal Exploits Unlock Black-Box MLLMs Jailbreaks

Authors: Xinkai Wang, Beibei Li, Zerui Shao, Ao Liu, Shouling Ji | Published: 2025-10-20
Disabling Safety Mechanisms of LLM
Prompt Injection
Malicious Content Generation

Proactive defense against LLM Jailbreak

Authors: Weiliang Zhao, Jinjun Peng, Daniel Ben-Levi, Zhou Yu, Junfeng Yang | Published: 2025-10-06
Disabling Safety Mechanisms of LLM
Prompt Injection
Integration of Defense Methods

LLM Watermark Evasion via Bias Inversion

Authors: Jeongyeon Hwang, Sangdon Park, Jungseul Ok | Published: 2025-09-27 | Updated: 2025-10-01
Disabling Safety Mechanisms of LLM
Model Inversion
Statistical Testing

Backdoor Attribution: Elucidating and Controlling Backdoor in Language Models

Authors: Miao Yu, Zhenhong Zhou, Moayad Aloqaily, Kun Wang, Biwei Huang, Stephen Wang, Yueming Jin, Qingsong Wen | Published: 2025-09-26 | Updated: 2025-09-30
Disabling Safety Mechanisms of LLM
Self-Attention Mechanism
Interpretability

RLCracker: Exposing the Vulnerability of LLM Watermarks with Adaptive RL Attacks

Authors: Hanbo Huang, Yiran Zhang, Hao Zheng, Xuan Gong, Yihan Li, Lin Liu, Shiyu Liang | Published: 2025-09-25
Disabling Safety Mechanisms of LLM
Prompt Injection
Watermark Design