Output Constraints as Attack Surface: Exploiting Structured Generation to Bypass LLM Safety Mechanisms | Authors: Shuoming Zhang, Jiacheng Zhao, Ruiyuan Xu, Xiaobing Feng, Huimin Cui | Published: 2025-03-31 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification | Authors: Yingjie Zhang, Tong Liu, Zhe Zhao, Guozhu Meng, Kai Chen | Published: 2025-03-14 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Malicious Prompt
Tempest: Autonomous Multi-Turn Jailbreaking of Large Language Models with Tree Search | Authors: Andy Zhou, Ron Arel | Published: 2025-03-13 | Updated: 2025-05-21 | Tags: Disabling Safety Mechanisms of LLM, Attack Method, Generative Model
CyberLLMInstruct: A Pseudo-malicious Dataset Revealing Safety-performance Trade-offs in Cyber Security LLM Fine-tuning | Authors: Adel ElZemity, Budi Arief, Shujun Li | Published: 2025-03-12 | Updated: 2025-09-17 | Tags: Disabling Safety Mechanisms of LLM, Security Analysis, Prompt Injection
A Mousetrap: Fooling Large Reasoning Models for Jailbreak with Chain of Iterative Chaos | Authors: Yang Yao, Xuan Tong, Ruofan Wang, Yixu Wang, Lujundong Li, Liang Liu, Yan Teng, Yingchun Wang | Published: 2025-02-19 | Updated: 2025-06-03 | Tags: Disabling Safety Mechanisms of LLM, Ethical Considerations, Large Language Model
QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language | Authors: Qingsong Zou, Jingyu Xiao, Qing Li, Zhi Yan, Yuhang Wang, Li Xu, Wenxuan Wang, Kuofeng Gao, Ruoyu Li, Yong Jiang | Published: 2025-02-13 | Updated: 2025-05-26 | Tags: Disabling Safety Mechanisms of LLM, Prompt Leaking, Educational Analysis
Dagger Behind Smile: Fool LLMs with a Happy Ending Story | Authors: Xurui Song, Zhixin Xie, Shuo Huai, Jiayi Kong, Jun Luo | Published: 2025-01-19 | Updated: 2025-09-30 | Tags: Disabling Safety Mechanisms of LLM, Malicious Prompt, Effectiveness of Attack Methods
JailbreakLens: Interpreting Jailbreak Mechanism in the Lens of Representation and Circuit | Authors: Zeqing He, Zhibo Wang, Zhixuan Chu, Huiyu Xu, Wenhui Zhang, Qinglong Wang, Rui Zheng | Published: 2024-11-17 | Updated: 2025-04-24 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Large Language Model
What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks | Authors: Nathalie Kirch, Constantin Weisser, Severin Field, Helen Yannakoudakis, Stephen Casper | Published: 2024-11-02 | Updated: 2025-05-14 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Exploratory Attack
Jailbreaking and Mitigation of Vulnerabilities in Large Language Models | Authors: Benji Peng, Keyu Chen, Qian Niu, Ziqian Bi, Ming Liu, Pohsun Feng, Tianyang Wang, Lawrence K. Q. Yan, Yizhu Wen, Yichao Zhang, Caitlyn Heqi Yin | Published: 2024-10-20 | Updated: 2025-05-08 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection