PRISON: Unmasking the Criminal Potential of Large Language Models | Authors: Xinyi Wu, Geng Hong, Pei Chen, Yueyue Chen, Xudong Pan, Min Yang | Published: 2025-06-19 | Updated: 2025-08-04 | Tags: Disabling Safety Mechanisms of LLM, Law Enforcement Evasion, Research Methodology | Literature Database
LLMs Cannot Reliably Judge (Yet?): A Comprehensive Assessment on the Robustness of LLM-as-a-Judge | Authors: Songze Li, Chuokun Xu, Jiaying Wang, Xueluan Gong, Chen Chen, Jirui Zhang, Jun Wang, Kwok-Yan Lam, Shouling Ji | Published: 2025-06-11 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Adversarial Attack
Privacy and Security Threat for OpenAI GPTs | Authors: Wei Wenying, Zhao Kaifa, Xue Lei, Fan Ming | Published: 2025-06-04 | Tags: Disabling Safety Mechanisms of LLM, Privacy Issues, Defense Mechanism
BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage | Authors: Kalyan Nakka, Nitesh Saxena | Published: 2025-06-03 | Tags: Disabling Safety Mechanisms of LLM, Detection Rate of Phishing Attacks, Prompt Injection
Breaking the Ceiling: Exploring the Potential of Jailbreak Attacks through Expanding Strategy Space | Authors: Yao Huang, Yitong Sun, Shouwei Ruan, Yichi Zhang, Yinpeng Dong, Xingxing Wei | Published: 2025-05-27 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Attack Evaluation
Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models | Authors: Junjie Xiong, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu, Lingyao Li | Published: 2025-05-22 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
When Safety Detectors Aren't Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques | Authors: Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu | Published: 2025-05-22 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Watermark Removal Technology
Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs | Authors: Jiawen Wang, Pritha Gupta, Ivan Habernal, Eyke Hüllermeier | Published: 2025-05-20 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
Exploring Jailbreak Attacks on LLMs through Intent Concealment and Diversion | Authors: Tiehan Cui, Yanxu Mao, Peipei Liu, Congying Liu, Datao You | Published: 2025-05-20 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
PandaGuard: Systematic Evaluation of LLM Safety against Jailbreaking Attacks | Authors: Guobin Shen, Dongcheng Zhao, Linghao Feng, Xiang He, Jihang Wang, Sicheng Shen, Haibo Tong, Yiting Dong, Jindong Li, Xiang Zheng, Yi Zeng | Published: 2025-05-20 | Updated: 2025-05-22 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Effectiveness Analysis of Defense Methods