Toward Trustworthy Agentic AI: A Multimodal Framework for Preventing Prompt Injection Attacks | Authors: Toqeer Ali Syed, Mishal Ateeq Almutairi, Mahmoud Abdel Moaty | Published: 2025-12-29 | Indirect Prompt Injection, Prompt validation, Multimodal Safety
Assessing the Software Security Comprehension of Large Language Models | Authors: Mohammed Latif Siddiq, Natalie Sekerak, Antonio Karam, Maria Leal, Arvin Islam-Gomes, Joanna C. S. Santos | Published: 2025-12-24 | Indirect Prompt Injection, Security Analysis Method, Vulnerability Prioritization
Beyond Context: Large Language Models' Failure to Grasp Users' Intent | Authors: Ahmed M. Hussain, Salahuddin Salahuddin, Panos Papadimitratos | Published: 2025-12-24 | Indirect Prompt Injection, Multimodal Safety, Vulnerability Prioritization
AegisAgent: An Autonomous Defense Agent Against Prompt Injection Attacks in LLM-HARs | Authors: Yihan Wang, Huanqi Yang, Shantanu Pal, Weitao Xu | Published: 2025-12-24 | Indirect Prompt Injection, Prompt Injection, Adversarial Attack Assessment
A Systematic Study of Code Obfuscation Against LLM-based Vulnerability Detection | Authors: Xiao Li, Yue Li, Hao Wu, Yue Zhang, Yechao Zhang, Fengyuan Xu, Sheng Zhong | Published: 2025-12-18 | Indirect Prompt Injection, Prompt Injection, Obfuscation Techniques
Large Language Models as a (Bad) Security Norm in the Context of Regulation and Compliance | Authors: Kaspar Rosager Ludvigsen | Published: 2025-12-18 | LLM Application, Indirect Prompt Injection, Large Language Model
Love, Lies, and Language Models: Investigating AI’s Role in Romance-Baiting Scams | Authors: Gilad Gressel, Rahul Pankajakshan, Shir Rozenfeld, Ling Li, Ivan Franceschini, Krishnashree Achuthan, Yisroel Mirsky | Published: 2025-12-18 | LLM Application, Indirect Prompt Injection, Social Impact
PerProb: Indirectly Evaluating Memorization in Large Language Models | Authors: Yihan Liao, Jacky Keung, Xiaoxue Ma, Jingyu Zhang, Yicheng Sun | Published: 2025-12-16 | Indirect Prompt Injection, Privacy protection framework, Prompt leaking
Reasoning-Style Poisoning of LLM Agents via Stealthy Style Transfer: Process-Level Attacks and Runtime Monitoring in RSV Space | Authors: Xingfu Zhou, Pengfei Wang | Published: 2025-12-16 | Indirect Prompt Injection, Style Manipulation, Process-Level Attack
PentestEval: Benchmarking LLM-based Penetration Testing with Modular and Stage-Level Design | Authors: Ruozhao Yang, Mingfei Cheng, Gelei Deng, Tianwei Zhang, Junjie Wang, Xiaofei Xie | Published: 2025-12-16 | Indirect Prompt Injection, Prompt Injection, Vulnerability Management