Adversarial Attack Analysis

When Reject Turns into Accept: Quantifying the Vulnerability of LLM-Based Scientific Reviewers to Indirect Prompt Injection

Authors: Devanshu Sahoo, Manish Prasad, Vasudev Majhi, Jahnvi Singh, Vinay Chamola, Yash Sinha, Murari Mandal, Dhruv Kumar | Published: 2025-12-11
Indirect Prompt Injection
Adversarial Attack Analysis
Evaluation Methods

DUMB and DUMBer: Is Adversarial Training Worth It in the Real World?

Authors: Francesco Marchiori, Marco Alecci, Luca Pajola, Mauro Conti | Published: 2025-06-23
Model Architecture
Model Robustness Guarantees
Adversarial Attack Analysis

Exploring Backdoor Attack and Defense for LLM-empowered Recommendations

Authors: Liangbo Ning, Wenqi Fan, Qing Li | Published: 2025-04-15
LLM Performance Evaluation
Poisoning Attacks on RAG
Adversarial Attack Analysis

Bypassing Prompt Injection and Jailbreak Detection in LLM Guardrails

Authors: William Hackett, Lewis Birch, Stefan Trawicki, Neeraj Suri, Peter Garraghan | Published: 2025-04-15
LLM Performance Evaluation
Prompt Injection
Adversarial Attack Analysis

Adversarial Attacks Against Medical Deep Learning Systems

Authors: Samuel G. Finlayson, Hyung Won Chung, Isaac S. Kohane, Andrew L. Beam | Published: 2018-04-15 | Updated: 2019-02-04
Adversarial Learning
Adversarial Attack Analysis
Deep Learning

A Grid Based Adversarial Clustering Algorithm

Authors: Wutao Wei, Nikhil Gupta, Bowei Xi | Published: 2018-04-13 | Updated: 2024-11-21
Data Contamination Detection
Adversarial Attack Analysis
Anomaly Detection Methods

Label Sanitization against Label Flipping Poisoning Attacks

Authors: Andrea Paudice, Luis Muñoz-González, Emil C. Lupu | Published: 2018-03-02 | Updated: 2018-10-02
Adversarial Attack Analysis
Machine Learning Techniques
Poisoned Data Detection

Generalized Byzantine-tolerant SGD

Authors: Cong Xie, Oluwasanmi Koyejo, Indranil Gupta | Published: 2018-02-27 | Updated: 2018-03-23
Robust Estimation
Adversarial Attack Analysis
Machine Learning Techniques

Understanding and Enhancing the Transferability of Adversarial Examples

Authors: Lei Wu, Zhanxing Zhu, Cheng Tai, Weinan E | Published: 2018-02-27
Model Evaluation Methods
Adversarial Learning
Adversarial Attack Analysis

Robust GANs against Dishonest Adversaries

Authors: Zhi Xu, Chengtao Li, Stefanie Jegelka | Published: 2018-02-27 | Updated: 2019-10-10
Robust Estimation
Adversarial Attack Analysis
Adversarial Training