Literature Database

SecureCAI: Injection-Resilient LLM Assistants for Cybersecurity Operations
Authors: Mohammed Himayath Ali, Mohammed Aqib Abdullah, Mohammed Mudassir Uddin, Shahnawaz Alam | Published: 2026-01-12
Tags: Indirect Prompt Injection, Prompt Injection, Adversarial Attack Analysis
When Bots Take the Bait: Exposing and Mitigating the Emerging Social Engineering Attack in Web Automation Agent
Authors: Xinyi Wu, Geng Hong, Yueyue Chen, MingXuan Liu, Feier Jin, Xudong Pan, Jiarun Dai, Baojun Liu | Published: 2026-01-12
Tags: Indirect Prompt Injection, Prompt Injection, User Behavior Analysis
Know Thy Enemy: Securing LLMs Against Prompt Injection via Diverse Data Synthesis and Instruction-Level Chain-of-Thought Learning
Authors: Zhiyuan Chang, Mingyang Li, Yuekai Huang, Ziyou Jiang, Xiaojun Jia, Qian Xiong, Junjie Wang, Zhaoyang Li, Qing Wang | Published: 2026-01-08
Tags: Disabling Safety Mechanisms of LLM, Indirect Prompt Injection, Privacy Protection Method
Toward Trustworthy Agentic AI: A Multimodal Framework for Preventing Prompt Injection Attacks
Authors: Toqeer Ali Syed, Mishal Ateeq Almutairi, Mahmoud Abdel Moaty | Published: 2025-12-29
Tags: Indirect Prompt Injection, Prompt Validation, Multimodal Safety
Assessing the Software Security Comprehension of Large Language Models
Authors: Mohammed Latif Siddiq, Natalie Sekerak, Antonio Karam, Maria Leal, Arvin Islam-Gomes, Joanna C. S. Santos | Published: 2025-12-24
Tags: Indirect Prompt Injection, Security Analysis Method, Vulnerability Prioritization
Beyond Context: Large Language Models Failure to Grasp Users Intent
Authors: Ahmed M. Hussain, Salahuddin Salahuddin, Panos Papadimitratos | Published: 2025-12-24
Tags: Indirect Prompt Injection, Multimodal Safety, Vulnerability Prioritization
AegisAgent: An Autonomous Defense Agent Against Prompt Injection Attacks in LLM-HARs
Authors: Yihan Wang, Huanqi Yang, Shantanu Pal, Weitao Xu | Published: 2025-12-24
Tags: Indirect Prompt Injection, Prompt Injection, Adversarial Attack Assessment
A Systematic Study of Code Obfuscation Against LLM-based Vulnerability Detection
Authors: Xiao Li, Yue Li, Hao Wu, Yue Zhang, Yechao Zhang, Fengyuan Xu, Sheng Zhong | Published: 2025-12-18
Tags: Indirect Prompt Injection, Prompt Injection, Obfuscation Techniques
Large Language Models as a (Bad) Security Norm in the Context of Regulation and Compliance
Authors: Kaspar Rosager Ludvigsen | Published: 2025-12-18
Tags: LLM Utilization, Indirect Prompt Injection, Large Language Model
Love, Lies, and Language Models: Investigating AI’s Role in Romance-Baiting Scams
Authors: Gilad Gressel, Rahul Pankajakshan, Shir Rozenfeld, Ling Li, Ivan Franceschini, Krishnahsree Achuthan, Yisroel Mirsky | Published: 2025-12-18
Tags: LLM Utilization, Indirect Prompt Injection, Social Impact