Bypassing LLM Safety Mechanisms

EverTracer: Hunting Stolen Large Language Models via Stealthy and Robust Probabilistic Fingerprint

Authors: Zhenhua Xu, Meng Han, Wenpeng Xing | Published: 2025-09-03
Bypassing LLM Safety Mechanisms
Data Protection Methods
Prompt Validation

Consiglieres in the Shadow: Understanding the Use of Uncensored Large Language Models in Cybercrimes

Authors: Zilong Lin, Zichuan Li, Xiaojing Liao, XiaoFeng Wang | Published: 2025-08-18
Bypassing LLM Safety Mechanisms
Data Generation Methods
Output Harmfulness Scoring

PRISON: Unmasking the Criminal Potential of Large Language Models

Authors: Xinyi Wu, Geng Hong, Pei Chen, Yueyue Chen, Xudong Pan, Min Yang | Published: 2025-06-19 | Updated: 2025-08-04
Bypassing LLM Safety Mechanisms
Law Enforcement Evasion
Research Methodology

LLMs Cannot Reliably Judge (Yet?): A Comprehensive Assessment on the Robustness of LLM-as-a-Judge

Authors: Songze Li, Chuokun Xu, Jiaying Wang, Xueluan Gong, Chen Chen, Jirui Zhang, Jun Wang, Kwok-Yan Lam, Shouling Ji | Published: 2025-06-11
Bypassing LLM Safety Mechanisms
Prompt Injection
Adversarial Attacks

Privacy and Security Threat for OpenAI GPTs

Authors: Wei Wenying, Zhao Kaifa, Xue Lei, Fan Ming | Published: 2025-06-04
Bypassing LLM Safety Mechanisms
Privacy Issues
Defense Mechanisms

BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage

Authors: Kalyan Nakka, Nitesh Saxena | Published: 2025-06-03
Bypassing LLM Safety Mechanisms
Phishing Attack Detection Rate
Prompt Injection

Breaking the Ceiling: Exploring the Potential of Jailbreak Attacks through Expanding Strategy Space

Authors: Yao Huang, Yitong Sun, Shouwei Ruan, Yichi Zhang, Yinpeng Dong, Xingxing Wei | Published: 2025-05-27
Bypassing LLM Safety Mechanisms
Prompt Injection
Attack Evaluation

Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models

Authors: Junjie Xiong, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu, Lingyao Li | Published: 2025-05-22
LLM Security
Bypassing LLM Safety Mechanisms
Prompt Injection

When Safety Detectors Aren’t Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques

Authors: Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu | Published: 2025-05-22
Bypassing LLM Safety Mechanisms
Prompt Injection
Watermark Removal Techniques

Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs

Authors: Jiawen Wang, Pritha Gupta, Ivan Habernal, Eyke Hüllermeier | Published: 2025-05-20
LLM Security
Bypassing LLM Safety Mechanisms
Prompt Injection