Large Language Models for Cyber Security: A Systematic Literature Review | Authors: Hanxiang Xu, Shenao Wang, Ningke Li, Kailong Wang, Yanjie Zhao, Kai Chen, Ting Yu, Yang Liu, Haoyu Wang | Published: 2024-05-08 | Updated: 2025-05-15 | Tags: LLM Security, Indirect Prompt Injection, Literature Review
Defending Against Indirect Prompt Injection Attacks With Spotlighting | Authors: Keegan Hines, Gary Lopez, Matthew Hall, Federico Zarfati, Yonatan Zunger, Emre Kiciman | Published: 2024-03-20 | Tags: Indirect Prompt Injection, Prompt Injection, Malicious Prompt
InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents | Authors: Qiusi Zhan, Zhixiang Liang, Zifan Ying, Daniel Kang | Published: 2024-03-05 | Updated: 2024-08-04 | Tags: Indirect Prompt Injection, Taxonomy of Attacks, Vulnerability Analysis
Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models | Authors: Jingwei Yi, Yueqi Xie, Bin Zhu, Emre Kiciman, Guangzhong Sun, Xing Xie, Fangzhao Wu | Published: 2023-12-21 | Updated: 2025-01-27 | Tags: Indirect Prompt Injection, Malicious Prompt, Vulnerability Analysis
Abusing Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs | Authors: Eugene Bagdasaryan, Tsung-Yin Hsieh, Ben Nassi, Vitaly Shmatikov | Published: 2023-07-19 | Updated: 2023-10-03 | Tags: Indirect Prompt Injection, Malicious Prompt, Adversarial Example
Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection | Authors: Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, Mario Fritz | Published: 2023-02-23 | Updated: 2023-05-05 | Tags: Indirect Prompt Injection, Prompt Injection, Malicious Prompt