Toxicity Detection for Free | Authors: Zhanhao Hu, Julien Piet, Geng Zhao, Jiantao Jiao, David Wagner | Published: 2024-05-29 | Updated: 2024-11-08 | Tags: Indirect Prompt Injection, Prompt Validation, Malicious Prompt | Literature Database
Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation | Authors: Jiangnan Li, Yingyuan Yang, Jinyuan Sun | Published: 2024-05-10 | Updated: 2025-04-21 | Tags: LLM Performance Evaluation, Indirect Prompt Injection, Attack Detection
Large Language Models for Cyber Security: A Systematic Literature Review | Authors: Hanxiang Xu, Shenao Wang, Ningke Li, Kailong Wang, Yanjie Zhao, Kai Chen, Ting Yu, Yang Liu, Haoyu Wang | Published: 2024-05-08 | Updated: 2025-05-15 | Tags: LLM Security, Indirect Prompt Injection, Literature Review
Defending Against Indirect Prompt Injection Attacks With Spotlighting | Authors: Keegan Hines, Gary Lopez, Matthew Hall, Federico Zarfati, Yonatan Zunger, Emre Kiciman | Published: 2024-03-20 | Tags: Indirect Prompt Injection, Prompt Injection, Malicious Prompt
InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents | Authors: Qiusi Zhan, Zhixiang Liang, Zifan Ying, Daniel Kang | Published: 2024-03-05 | Updated: 2024-08-04 | Tags: Indirect Prompt Injection, Taxonomy of Attacks, Vulnerability Analysis
Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models | Authors: Jingwei Yi, Yueqi Xie, Bin Zhu, Emre Kiciman, Guangzhong Sun, Xing Xie, Fangzhao Wu | Published: 2023-12-21 | Updated: 2025-01-27 | Tags: Indirect Prompt Injection, Malicious Prompt, Vulnerability Analysis
Abusing Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs | Authors: Eugene Bagdasaryan, Tsung-Yin Hsieh, Ben Nassi, Vitaly Shmatikov | Published: 2023-07-19 | Updated: 2023-10-03 | Tags: Indirect Prompt Injection, Malicious Prompt, Adversarial Example
Not what you’ve signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection | Authors: Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, Mario Fritz | Published: 2023-02-23 | Updated: 2023-05-05 | Tags: Indirect Prompt Injection, Prompt Injection, Malicious Prompt