Disabling Safety Mechanisms of LLM

QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language

Authors: Qingsong Zou, Jingyu Xiao, Qing Li, Zhi Yan, Yuhang Wang, Li Xu, Wenxuan Wang, Kuofeng Gao, Ruoyu Li, Yong Jiang | Published: 2025-02-13 | Updated: 2025-05-26
Disabling Safety Mechanisms of LLM
Prompt Leaking
Educational Analysis

JailbreakLens: Interpreting Jailbreak Mechanism in the Lens of Representation and Circuit

Authors: Zeqing He, Zhibo Wang, Zhixuan Chu, Huiyu Xu, Wenhui Zhang, Qinglong Wang, Rui Zheng | Published: 2024-11-17 | Updated: 2025-04-24
Disabling Safety Mechanisms of LLM
Prompt Injection
Large Language Model

What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks

Authors: Nathalie Kirch, Constantin Weisser, Severin Field, Helen Yannakoudakis, Stephen Casper | Published: 2024-11-02 | Updated: 2025-05-14
Disabling Safety Mechanisms of LLM
Prompt Injection
Exploratory Attack

Jailbreaking and Mitigation of Vulnerabilities in Large Language Models

Authors: Benji Peng, Keyu Chen, Qian Niu, Ziqian Bi, Ming Liu, Pohsun Feng, Tianyang Wang, Lawrence K. Q. Yan, Yizhu Wen, Yichao Zhang, Caitlyn Heqi Yin | Published: 2024-10-20 | Updated: 2025-05-08
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection

Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method

Authors: Weichao Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng | Published: 2024-09-23 | Updated: 2025-05-21
Disabling Safety Mechanisms of LLM
Model Performance Evaluation
Information Extraction