Poisoning Attacks

Merge Hijacking: Backdoor Attacks to Model Merging of Large Language Models

Authors: Zenghui Yuan, Yangming Xu, Jiawen Shi, Pan Zhou, Lichao Sun | Published: 2025-05-29
LLM Security
Poisoning Attacks
Model Protection Methods

CPA-RAG: Covert Poisoning Attacks on Retrieval-Augmented Generation in Large Language Models

Authors: Chunyang Li, Junwei Zhang, Anda Cheng, Zhuo Ma, Xinghua Li, Jianfeng Ma | Published: 2025-05-26
Poisoning Attacks on RAG
Text Generation Methods
Poisoning Attacks

Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?

Authors: Zi Liang, Haibo Hu, Qingqing Ye, Yaxin Xiao, Ronghua Li | Published: 2025-05-19
LLM Security
Poisoning Attacks
Robustness Requirements

One Shot Dominance: Knowledge Poisoning Attack on Retrieval-Augmented Generation Systems

Authors: Zhiyuan Chang, Mingyang Li, Xiaojun Jia, Junjie Wang, Yuekai Huang, Ziyou Jiang, Yang Liu, Qing Wang | Published: 2025-05-15 | Updated: 2025-05-20
Poisoning Attacks on RAG
Poisoning
Poisoning Attacks

Poison-RAG: Adversarial Data Poisoning Attacks on Retrieval-Augmented Generation in Recommender Systems

Authors: Fatemeh Nazary, Yashar Deldjoo, Tommaso di Noia | Published: 2025-01-20
Poisoning Attacks on RAG
Tag Selection Strategy
Poisoning Attacks

Learning to Poison Large Language Models for Downstream Manipulation

Authors: Xiangyu Zhou, Yao Qiang, Saleh Zare Zade, Mohammad Amin Roshani, Prashant Khanduri, Douglas Zytko, Dongxiao Zhu | Published: 2024-02-21 | Updated: 2025-05-29
LLM Security
Backdoor Attacks
Poisoning Attacks

PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models

Authors: Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia | Published: 2024-02-12 | Updated: 2024-08-13
Prompt Injection
Poisoning
Poisoning Attacks

Data Poisoning for In-context Learning

Authors: Pengfei He, Han Xu, Yue Xing, Hui Liu, Makoto Yamada, Jiliang Tang | Published: 2024-02-03 | Updated: 2025-06-02
Poisoning
Poisoning Attacks
Disinformation Detection

Towards Efficient and Certified Recovery from Poisoning Attacks in Federated Learning

Authors: Yu Jiang, Jiyuan Shen, Ziyao Liu, Chee Wei Tan, Kwok-Yan Lam | Published: 2024-01-16 | Updated: 2024-01-19
Poisoning
Poisoning Attacks
Federated Learning

Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks

Authors: Shuli Jiang, Swanand Ravindra Kadhe, Yi Zhou, Ling Cai, Nathalie Baracaldo | Published: 2023-12-07
LLM Security
Poisoning Attacks
Model Performance Evaluation