Large Language Models

The Scales of Justitia: A Comprehensive Survey on Safety Evaluation of LLMs

Authors: Songyang Liu, Chaozhuo Li, Jiameng Qiu, Xi Zhang, Feiran Huang, Litian Zhang, Yiming Hei, Philip S. Yu | Published: 2025-06-06 | Updated: 2025-10-30
Alignment
Large Language Models
Safety Evaluation

A Red Teaming Roadmap Towards System-Level Safety

Authors: Zifan Wang, Christina Q. Knight, Jeremy Kritz, Willow E. Primack, Julian Michael | Published: 2025-05-30 | Updated: 2025-06-09
Model DoS
Large Language Models
Product Safety

SafeCOMM: A Study on Safety Degradation in Fine-Tuned Telecom Large Language Models

Authors: Aladin Djuhera, Swanand Ravindra Kadhe, Farhan Ahmed, Syed Zawad, Fernando Koch, Walid Saad, Holger Boche | Published: 2025-05-29 | Updated: 2025-10-27
Prompt Injection
Large Language Models
Safety Evaluation

Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models

Authors: Yongcan Yu, Yanbo Wang, Ran He, Jian Liang | Published: 2025-05-28
LLM Security
Prompt Injection
Large Language Models

Deconstructing Obfuscation: A four-dimensional framework for evaluating Large Language Models assembly code deobfuscation capabilities

Authors: Anton Tkachenko, Dmitrij Suskevic, Benjamin Adolphi | Published: 2025-05-26
Model Evaluation Methods
Large Language Models
Watermarking Techniques

What Really Matters in Many-Shot Attacks? An Empirical Study of Long-Context Vulnerabilities in LLMs

Authors: Sangyeop Kim, Yohan Lee, Yongwoo Song, Kimin Lee | Published: 2025-05-26
Prompt Injection
Model Performance Evaluation
Large Language Models

Scalable Defense against In-the-wild Jailbreaking Attacks with Safety Context Retrieval

Authors: Taiye Chen, Zeming Wei, Ang Li, Yisen Wang | Published: 2025-05-21
RAG
Large Language Models
Defense Mechanisms

sudoLLM : On Multi-role Alignment of Language Models

Authors: Soumadeep Saha, Akshay Chaturvedi, Joy Mahapatra, Utpal Garain | Published: 2025-05-20
Alignment
Prompt Injection
Large Language Models

Improving LLM Outputs Against Jailbreak Attacks with Expert Model Integration

Authors: Tatia Tsmindashvili, Ana Kolkhidashvili, Dachi Kurtskhalia, Nino Maghlakelidze, Elene Mekvabishvili, Guram Dentoshvili, Orkhan Shamilov, Zaal Gachechiladze, Steven Saporta, David Dachi Choladze | Published: 2025-05-18 | Updated: 2025-08-11
Prompt Injection
Large Language Models
Performance Evaluation Methods

Dark LLMs: The Growing Threat of Unaligned AI Models

Authors: Michael Fire, Yitzhak Elbazis, Adi Wasenstein, Lior Rokach | Published: 2025-05-15
Disabling LLM Safety Mechanisms
Prompt Injection
Large Language Models