Literature Database

LeechHijack: Covert Computational Resource Exploitation in Intelligent Agent Systems
Authors: Yuanhe Zhang, Weiliu Wang, Zhenhong Zhou, Kun Wang, Jie Zhang, Li Sun, Yang Liu, Sen Su | Published: 2025-12-02
Tags: Indirect Prompt Injection, Cybersecurity, Backdoor Attack

Improving Phishing Resilience with AI-Generated Training: Evidence on Prompting, Personalization, and Duration
Authors: Francesco Greco, Giuseppe Desolda, Cesare Tucci, Andrea Esposito, Antonio Curci, Antonio Piccinno | Published: 2025-12-01
Tags: Indirect Prompt Injection, Cybersecurity, Training Method

Securing Large Language Models (LLMs) from Prompt Injection Attacks
Authors: Omar Farooq Khan Suri, John McCrae | Published: 2025-12-01
Tags: Indirect Prompt Injection, Cybersecurity, Effectiveness Analysis of Defense Methods

Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion
Authors: Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang | Published: 2025-11-24
Tags: Indirect Prompt Injection, Prompt Injection, Risk Assessment Method

RoguePrompt: Dual-Layer Ciphering for Self-Reconstruction to Circumvent LLM Moderation
Authors: Benyamin Tafreshian | Published: 2025-11-24
Tags: Indirect Prompt Injection, Prompt Leaking, Malicious Prompt

Taxonomy, Evaluation and Exploitation of IPI-Centric LLM Agent Defense Frameworks
Authors: Zimo Ji, Xunguang Wang, Zongjie Li, Pingchuan Ma, Yudong Gao, Daoyuan Wu, Xincheng Yan, Tian Tian, Shuai Wang | Published: 2025-11-19
Tags: Indirect Prompt Injection, Prompt Leaking, Adaptive Misuse Detection

Large Language Models for Cyber Security
Authors: Raunak Somani, Aswani Kumar Cherukuri | Published: 2025-11-06
Tags: Poisoning Attack on RAG, Indirect Prompt Injection, Information Security

Death by a Thousand Prompts: Open Model Vulnerability Analysis
Authors: Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan, Adam Swanda | Published: 2025-11-05
Tags: Disabling Safety Mechanisms of LLM, Indirect Prompt Injection, Threat Modeling

Measuring the Security of Mobile LLM Agents under Adversarial Prompts from Untrusted Third-Party Channels
Authors: Chenghao Du, Quanfeng Huang, Tingxuan Tang, Zihao Wang, Adwait Nadkarni, Yue Xiao | Published: 2025-10-31 | Updated: 2025-11-06
Tags: Indirect Prompt Injection, Prompt Injection, Information Security

Securing AI Agent Execution
Authors: Christoph Bühler, Matteo Biagiola, Luca Di Grazia, Guido Salvaneschi | Published: 2025-10-24 | Updated: 2025-10-29
Tags: Indirect Prompt Injection, Model Extraction Attack, Dynamic Access Control