Disabling Safety Mechanisms of LLMs

Privacy and Security Threat for OpenAI GPTs

Authors: Wei Wenying, Zhao Kaifa, Xue Lei, Fan Ming | Published: 2025-06-04
Disabling Safety Mechanisms of LLMs
Privacy Issues
Defense Mechanism

BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage

Authors: Kalyan Nakka, Nitesh Saxena | Published: 2025-06-03
Disabling Safety Mechanisms of LLMs
Detection Rate of Phishing Attacks
Prompt Injection

Breaking the Ceiling: Exploring the Potential of Jailbreak Attacks through Expanding Strategy Space

Authors: Yao Huang, Yitong Sun, Shouwei Ruan, Yichi Zhang, Yinpeng Dong, Xingxing Wei | Published: 2025-05-27
Disabling Safety Mechanisms of LLMs
Prompt Injection
Attack Evaluation

Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models

Authors: Junjie Xiong, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu, Lingyao Li | Published: 2025-05-22
LLM Security
Disabling Safety Mechanisms of LLMs
Prompt Injection

When Safety Detectors Aren’t Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques

Authors: Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu | Published: 2025-05-22
Disabling Safety Mechanisms of LLMs
Prompt Injection
Watermark Removal Technology

Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs

Authors: Jiawen Wang, Pritha Gupta, Ivan Habernal, Eyke Hüllermeier | Published: 2025-05-20
LLM Security
Disabling Safety Mechanisms of LLMs
Prompt Injection

Exploring Jailbreak Attacks on LLMs through Intent Concealment and Diversion

Authors: Tiehan Cui, Yanxu Mao, Peipei Liu, Congying Liu, Datao You | Published: 2025-05-20
LLM Security
Disabling Safety Mechanisms of LLMs
Prompt Injection

PandaGuard: Systematic Evaluation of LLM Safety against Jailbreaking Attacks

Authors: Guobin Shen, Dongcheng Zhao, Linghao Feng, Xiang He, Jihang Wang, Sicheng Shen, Haibo Tong, Yiting Dong, Jindong Li, Xiang Zheng, Yi Zeng | Published: 2025-05-20 | Updated: 2025-05-22
Disabling Safety Mechanisms of LLMs
Prompt Injection
Effectiveness Analysis of Defense Methods

JULI: Jailbreak Large Language Models by Self-Introspection

Authors: Jesson Wang, Zhanhao Hu, David Wagner | Published: 2025-05-17 | Updated: 2025-05-20
API Security
Disabling Safety Mechanisms of LLMs
Prompt Injection

Dark LLMs: The Growing Threat of Unaligned AI Models

Authors: Michael Fire, Yitzhak Elbazis, Adi Wasenstein, Lior Rokach | Published: 2025-05-15
Disabling Safety Mechanisms of LLMs
Prompt Injection
Large Language Model