Cooperation Effects with LLMs

Beyond Jailbreaks: Revealing Stealthier and Broader LLM Security Risks Stemming from Alignment Failures

Authors: Yukai Zhou, Sibei Yang, Wenjie Wang | Published: 2025-06-09
Cooperation Effects with LLMs
Cyber Threats
Large Language Models

TracLLM: A Generic Framework for Attributing Long Context LLMs

Authors: Yanting Wang, Wei Zou, Runpeng Geng, Jinyuan Jia | Published: 2025-06-04
Cooperation Effects with LLMs
Poisoning Attacks on RAG
Efficiency Evaluation

Bridging Expertise Gaps: The Role of LLMs in Human-AI Collaboration for Cybersecurity

Authors: Shahroz Tariq, Ronal Singh, Mohan Baruwal Chhetri, Surya Nepal, Cecile Paris | Published: 2025-05-06
Cooperation Effects with LLMs
Alignment
Analysis of Participant Questions

Knowledge-to-Jailbreak: Investigating Knowledge-driven Jailbreaking Attacks for Large Language Models

Authors: Shangqing Tu, Zhuoran Pan, Wenxuan Wang, Zhexin Zhang, Yuliang Sun, Jifan Yu, Hongning Wang, Lei Hou, Juanzi Li | Published: 2024-06-17 | Updated: 2025-06-09
Cooperation Effects with LLMs
Prompt Injection
Large Language Models