Disabling Safety Mechanisms of LLM

Siege: Autonomous Multi-Turn Jailbreaking of Large Language Models with Tree Search

Authors: Andy Zhou | Published: 2025-03-13 | Updated: 2025-03-16
Disabling Safety Mechanisms of LLM
Attack Methods
Generative Models

CyberLLMInstruct: A Pseudo-malicious Dataset Revealing Safety-performance Trade-offs in Cyber Security LLM Fine-tuning

Authors: Adel ElZemity, Budi Arief, Shujun Li | Published: 2025-03-12 | Updated: 2025-09-17
Disabling Safety Mechanisms of LLM
Security Analysis
Prompt Injection

A Mousetrap: Fooling Large Reasoning Models for Jailbreak with Chain of Iterative Chaos

Authors: Yang Yao, Xuan Tong, Ruofan Wang, Yixu Wang, Lujundong Li, Liang Liu, Yan Teng, Yingchun Wang | Published: 2025-02-19 | Updated: 2025-06-03
Disabling Safety Mechanisms of LLM
Ethical Considerations
Large Language Models

QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language

Authors: Qingsong Zou, Jingyu Xiao, Qing Li, Zhi Yan, Yuhang Wang, Li Xu, Wenxuan Wang, Kuofeng Gao, Ruoyu Li, Yong Jiang | Published: 2025-02-13 | Updated: 2025-05-26
Disabling Safety Mechanisms of LLM
Prompt Leaking
Educational Analysis

Dagger Behind Smile: Fool LLMs with a Happy Ending Story

Authors: Xurui Song, Zhixin Xie, Shuo Huai, Jiayi Kong, Jun Luo | Published: 2025-01-19 | Updated: 2025-09-30
Disabling Safety Mechanisms of LLM
Malicious Prompts
Effectiveness of Attack Methods

What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks

Authors: Nathalie Kirch, Constantin Weisser, Severin Field, Helen Yannakoudakis, Stephen Casper | Published: 2024-11-02 | Updated: 2025-05-14
Disabling Safety Mechanisms of LLM
Prompt Injection
Exploratory Attacks

Jailbreaking and Mitigation of Vulnerabilities in Large Language Models

Authors: Benji Peng, Keyu Chen, Qian Niu, Ziqian Bi, Ming Liu, Pohsun Feng, Tianyang Wang, Lawrence K. Q. Yan, Yizhu Wen, Yichao Zhang, Caitlyn Heqi Yin | Published: 2024-10-20 | Updated: 2025-05-08
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection

Feint and Attack: Attention-Based Strategies for Jailbreaking and Protecting LLMs

Authors: Rui Pu, Chaozhuo Li, Rui Ha, Zejian Chen, Litian Zhang, Zheng Liu, Lirong Qiu, Zaisheng Ye | Published: 2024-10-18 | Updated: 2025-07-08
Disabling Safety Mechanisms of LLM
Prompt Injection
Prompt Validation

Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method

Authors: Weichao Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng | Published: 2024-09-23 | Updated: 2025-04-01
Disabling Safety Mechanisms of LLM
Model Performance Evaluation
Information Extraction