Literature Database

CPA-RAG: Covert Poisoning Attacks on Retrieval-Augmented Generation in Large Language Models

Authors: Chunyang Li, Junwei Zhang, Anda Cheng, Zhuo Ma, Xinghua Li, Jianfeng Ma | Published: 2025-05-26
Poisoning Attack on RAG
Text Generation Method
Poisoning Attack

What Really Matters in Many-Shot Attacks? An Empirical Study of Long-Context Vulnerabilities in LLMs

Authors: Sangyeop Kim, Yohan Lee, Yongwoo Song, Kimin Lee | Published: 2025-05-26
Prompt Injection
Model Performance Evaluation
Large Language Model

CoTGuard: Using Chain-of-Thought Triggering for Copyright Protection in Multi-Agent LLM Systems

Authors: Yan Wen, Junfeng Guo, Heng Huang | Published: 2025-05-26
LLM Security
Trigger-Based Watermarking
Copyright Protection

VADER: A Human-Evaluated Benchmark for Vulnerability Assessment, Detection, Explanation, and Remediation

Authors: Ethan TS. Liu, Austin Wang, Spencer Mateega, Carlos Georgescu, Danny Tang | Published: 2025-05-26
Website Vulnerability
Hallucination
Dynamic Vulnerability Management

Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models

Authors: Junjie Xiong, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu, Lingyao Li | Published: 2025-05-22
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection

Backdoor Cleaning without External Guidance in MLLM Fine-tuning

Authors: Xuankun Rong, Wenke Huang, Jian Liang, Jinhe Bi, Xun Xiao, Yiming Li, Bo Du, Mang Ye | Published: 2025-05-22
LLM Security
Backdoor Attack

CAIN: Hijacking LLM-Humans Conversations via a Two-Stage Malicious System Prompt Generation and Refining Framework

Authors: Viet Pham, Thai Le | Published: 2025-05-22
LLM Security
Prompt Injection
Adversarial Learning

Unlearning Isn’t Deletion: Investigating Reversibility of Machine Unlearning in LLMs

Authors: Xiaoyu Xu, Xiang Yue, Yang Liu, Qingqing Ye, Haibo Hu, Minxin Du | Published: 2025-05-22
Bias Detection in AI Output
Privacy Management
Machine Learning

CoTSRF: Utilize Chain of Thought as Stealthy and Robust Fingerprint of Large Language Models

Authors: Zhenzhen Ren, GuoBiao Li, Sheng Li, Zhenxing Qian, Xinpeng Zhang | Published: 2025-05-22
LLM Security
Fingerprinting Method
Model Identification

When Safety Detectors Aren’t Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques

Authors: Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu | Published: 2025-05-22
Disabling Safety Mechanisms of LLM
Prompt Injection
Watermark Removal Technology