This page provides the attacks and factors that have a negative impact “Leakage of LLM system prompts” in the information systems aspect in the AI Security Map, the defense methods and countermeasures against them, as well as the relevant AI technologies, tasks, and data. It also indicates related elements in the external influence aspect.
Attack or cause
Defensive method or countermeasure
Targeted AI technology
- LLM
Task
- Generation
Data
- Text
Related external influence aspect
References
Prompt leaking
- Effective Prompt Extraction from Language Models, 2023
- PLeak: Prompt Leaking Attacks against Large Language Model Applications, 2024
- What Was Your Prompt? A Remote Keylogging Attack on AI Assistants, 2024
- Prompt Stealing Attacks Against Large Language Models, 2024
- Assessing Prompt Injection Risks in 200+ Custom GPTs, 2024
- PRSA: PRompt Stealing Attacks against Large Language Models, 2024
- Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models, 2025