Multimodal Safety Is Asymmetric: Cross-Modal Exploits Unlock Black-Box MLLMs Jailbreaks | Authors: Xinkai Wang, Beibei Li, Zerui Shao, Ao Liu, Shouling Ji | Published: 2025-10-20 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Malicious Content Generation
Proactive defense against LLM Jailbreak | Authors: Weiliang Zhao, Jinjun Peng, Daniel Ben-Levi, Zhou Yu, Junfeng Yang | Published: 2025-10-06 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Integration of Defense Methods
LLM Watermark Evasion via Bias Inversion | Authors: Jeongyeon Hwang, Sangdon Park, Jungseul Ok | Published: 2025-09-27 | Updated: 2025-10-01 | Tags: Disabling Safety Mechanisms of LLM, Model Inversion, Statistical Testing
Backdoor Attribution: Elucidating and Controlling Backdoor in Language Models | Authors: Miao Yu, Zhenhong Zhou, Moayad Aloqaily, Kun Wang, Biwei Huang, Stephen Wang, Yueming Jin, Qingsong Wen | Published: 2025-09-26 | Updated: 2025-09-30 | Tags: Disabling Safety Mechanisms of LLM, Self-Attention Mechanism, Interpretability
RLCracker: Exposing the Vulnerability of LLM Watermarks with Adaptive RL Attacks | Authors: Hanbo Huang, Yiran Zhang, Hao Zheng, Xuan Gong, Yihan Li, Lin Liu, Shiyu Liang | Published: 2025-09-25 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Watermark Design
bi-GRPO: Bidirectional Optimization for Jailbreak Backdoor Injection on LLMs | Authors: Wence Ji, Jiancan Wu, Aiying Li, Shuyi Zhang, Junkang Wu, An Zhang, Xiang Wang, Xiangnan He | Published: 2025-09-24 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Generative Model
Send to which account? Evaluation of an LLM-based Scambaiting System | Authors: Hossein Siadati, Haadi Jafarian, Sima Jafarikhah | Published: 2025-09-10 | Tags: Disabling Safety Mechanisms of LLM, Research Methodology, Scam Countermeasures
Embedding Poisoning: Bypassing Safety Alignment via Embedding Semantic Shift | Authors: Shuai Yuan, Zhibo Zhang, Yuxi Li, Guangdong Bai, Wang Kailong | Published: 2025-09-08 | Tags: Disabling Safety Mechanisms of LLM, Calculation of Output Harmfulness, Attack Detection Method
EverTracer: Hunting Stolen Large Language Models via Stealthy and Robust Probabilistic Fingerprint | Authors: Zhenhua Xu, Meng Han, Wenpeng Xing | Published: 2025-09-03 | Tags: Disabling Safety Mechanisms of LLM, Data Protection Method, Prompt Validation
Consiglieres in the Shadow: Understanding the Use of Uncensored Large Language Models in Cybercrimes | Authors: Zilong Lin, Zichuan Li, Xiaojing Liao, XiaoFeng Wang | Published: 2025-08-18 | Tags: Disabling Safety Mechanisms of LLM, Data Generation Method, Calculation of Output Harmfulness