Indirect Prompt Injection

Monitoring LLM-based Multi-Agent Systems Against Corruptions via Node Evaluation

Authors: Chengcan Wu, Zhixin Zhang, Mingqian Xu, Zeming Wei, Meng Sun | Published: 2025-10-22
Indirect Prompt Injection
Agent Design
Network Threat Detection

Defending Against Prompt Injection with DataFilter

Authors: Yizhu Wang, Sizhe Chen, Raghad Alkhudair, Basel Alomair, David Wagner | Published: 2025-10-22
Indirect Prompt Injection
Prompt Injection
Prompt Injection Attack

LLM Agents for Automated Web Vulnerability Reproduction: Are We There Yet?

Authors: Bin Liu, Yanjie Zhao, Guoai Xu, Haoyu Wang | Published: 2025-10-16
Indirect Prompt Injection
Agent Design
Security Analysis

In-Browser LLM-Guided Fuzzing for Real-Time Prompt Injection Testing in Agentic AI Browsers

Authors: Avihay Cohen | Published: 2025-10-15
Indirect Prompt Injection
Large Language Model
Automated Generation Framework

TypePilot: Leveraging the Scala Type System for Secure LLM-generated Code

Authors: Alexander Sternfeld, Andrei Kucharavy, Ljiljana Dolamic | Published: 2025-10-13
Indirect Prompt Injection
Security Analysis Method
Prompt leaking

From Defender to Devil? Unintended Risk Interactions Induced by LLM Defenses

Authors: Xiangtao Meng, Tianshuo Cong, Li Wang, Wenyu Chen, Zheng Li, Shanqing Guo, Xiaoyun Wang | Published: 2025-10-09
Alignment
Indirect Prompt Injection
Defense Effectiveness Analysis

Unified Threat Detection and Mitigation Framework (UTDMF): Combating Prompt Injection, Deception, and Bias in Enterprise-Scale Transformers

Authors: Santhosh Kumar Ravindran | Published: 2025-10-06
Indirect Prompt Injection
Bias Mitigation Techniques
Integration of Defense Methods

Autonomy Matters: A Study on Personalization-Privacy Dilemma in LLM Agents

Authors: Zhiping Zhang, Yi Evie Zhang, Freda Shi, Tianshi Li | Published: 2025-10-06
Indirect Prompt Injection
Privacy-Preserving Machine Learning
User Activity Analysis

Position: Privacy Is Not Just Memorization!

Authors: Niloofar Mireshghallah, Tianshi Li | Published: 2025-10-02
Indirect Prompt Injection
Privacy-Preserving Machine Learning
Privacy Classification

Fine-Tuning Jailbreaks under Highly Constrained Black-Box Settings: A Three-Pronged Approach

Authors: Xiangfang Li, Yu Wang, Bo Li | Published: 2025-10-01 | Updated: 2025-10-09
Indirect Prompt Injection
Prompt leaking
Defense Mechanism