Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs | Authors: Chetan Pathade | Published: 2025-05-07 | Updated: 2025-05-13 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection | Literature Database
XBreaking: Explainable Artificial Intelligence for Jailbreaking LLMs | Authors: Marco Arazzi, Vignesh Kumar Kembu, Antonino Nocera, Vinod P | Published: 2025-04-30 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Explanation Method
LLM-IFT: LLM-Powered Information Flow Tracking for Secure Hardware | Authors: Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, Kimia Azar | Published: 2025-04-09 | Tags: Disabling Safety Mechanisms of LLM, Framework, Efficient Configuration Verification
Output Constraints as Attack Surface: Exploiting Structured Generation to Bypass LLM Safety Mechanisms | Authors: Shuoming Zhang, Jiacheng Zhao, Ruiyuan Xu, Xiaobing Feng, Huimin Cui | Published: 2025-03-31 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification | Authors: Yingjie Zhang, Tong Liu, Zhe Zhao, Guozhu Meng, Kai Chen | Published: 2025-03-14 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Malicious Prompt
Tempest: Autonomous Multi-Turn Jailbreaking of Large Language Models with Tree Search | Authors: Andy Zhou, Ron Arel | Published: 2025-03-13 | Updated: 2025-05-21 | Tags: Disabling Safety Mechanisms of LLM, Attack Method, Generative Model
CyberLLMInstruct: A Pseudo-malicious Dataset Revealing Safety-Performance Trade-offs in Cyber Security LLM Fine-tuning | Authors: Adel ElZemity, Budi Arief, Shujun Li | Published: 2025-03-12 | Updated: 2025-09-17 | Tags: Disabling Safety Mechanisms of LLM, Security Analysis, Prompt Injection
A Mousetrap: Fooling Large Reasoning Models for Jailbreak with Chain of Iterative Chaos | Authors: Yang Yao, Xuan Tong, Ruofan Wang, Yixu Wang, Lujundong Li, Liang Liu, Yan Teng, Yingchun Wang | Published: 2025-02-19 | Updated: 2025-06-03 | Tags: Disabling Safety Mechanisms of LLM, Ethical Considerations, Large Language Model
QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language | Authors: Qingsong Zou, Jingyu Xiao, Qing Li, Zhi Yan, Yuhang Wang, Li Xu, Wenxuan Wang, Kuofeng Gao, Ruoyu Li, Yong Jiang | Published: 2025-02-13 | Updated: 2025-05-26 | Tags: Disabling Safety Mechanisms of LLM, Prompt Leaking, Educational Analysis
Dagger Behind Smile: Fool LLMs with a Happy Ending Story | Authors: Xurui Song, Zhixin Xie, Shuo Huai, Jiayi Kong, Jun Luo | Published: 2025-01-19 | Updated: 2025-09-30 | Tags: Disabling Safety Mechanisms of LLM, Malicious Prompt, Effectiveness of Attack Methods