Disabling Safety Mechanisms of LLM

EverTracer: Hunting Stolen Large Language Models via Stealthy and Robust Probabilistic Fingerprint

Authors: Zhenhua Xu, Meng Han, Wenpeng Xing | Published: 2025-09-03
Disabling Safety Mechanisms of LLM
Data Protection Method
Prompt Validation

Consiglieres in the Shadow: Understanding the Use of Uncensored Large Language Models in Cybercrimes

Authors: Zilong Lin, Zichuan Li, Xiaojing Liao, XiaoFeng Wang | Published: 2025-08-18
Disabling Safety Mechanisms of LLM
Data Generation Method
Calculation of Output Harmfulness

PRISON: Unmasking the Criminal Potential of Large Language Models

Authors: Xinyi Wu, Geng Hong, Pei Chen, Yueyue Chen, Xudong Pan, Min Yang | Published: 2025-06-19 | Updated: 2025-08-04
Disabling Safety Mechanisms of LLM
Law Enforcement Evasion
Research Methodology

LLMs Cannot Reliably Judge (Yet?): A Comprehensive Assessment on the Robustness of LLM-as-a-Judge

Authors: Songze Li, Chuokun Xu, Jiaying Wang, Xueluan Gong, Chen Chen, Jirui Zhang, Jun Wang, Kwok-Yan Lam, Shouling Ji | Published: 2025-06-11
Disabling Safety Mechanisms of LLM
Prompt Injection
Adversarial Attack

Privacy and Security Threat for OpenAI GPTs

Authors: Wei Wenying, Zhao Kaifa, Xue Lei, Fan Ming | Published: 2025-06-04
Disabling Safety Mechanisms of LLM
Privacy Issues
Defense Mechanism

BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage

Authors: Kalyan Nakka, Nitesh Saxena | Published: 2025-06-03
Disabling Safety Mechanisms of LLM
Detection Rate of Phishing Attacks
Prompt Injection

Breaking the Ceiling: Exploring the Potential of Jailbreak Attacks through Expanding Strategy Space

Authors: Yao Huang, Yitong Sun, Shouwei Ruan, Yichi Zhang, Yinpeng Dong, Xingxing Wei | Published: 2025-05-27
Disabling Safety Mechanisms of LLM
Prompt Injection
Attack Evaluation

Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models

Authors: Junjie Xiong, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu, Lingyao Li | Published: 2025-05-22
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection

When Safety Detectors Aren't Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques

Authors: Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu | Published: 2025-05-22
Disabling Safety Mechanisms of LLM
Prompt Injection
Watermark Removal Technology

Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs

Authors: Jiawen Wang, Pritha Gupta, Ivan Habernal, Eyke Hüllermeier | Published: 2025-05-20
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection