Prompt leaking

From Vulnerabilities to Remediation: A Systematic Literature Review of LLMs in Code Security

Authors: Enna Basic, Alberto Giaretta | Published: 2024-12-19 | Updated: 2025-04-14
Prompt Injection
Prompt leaking
Vulnerability detection

ThreatModeling-LLM: Automating Threat Modeling using Large Language Models for Banking System

Authors: Tingmin Wu, Shuiqiao Yang, Shigang Liu, David Nguyen, Seung Jang, Alsharif Abuadbba | Published: 2024-11-26 | Updated: 2025-05-14
Bias Detection in AI Output
Prompt leaking
Threat Modeling Automation

PEEK: Phishing Evolution Framework for Phishing Generation and Evolving Pattern Analysis using Large Language Models

Authors: Fengchao Chen, Tingmin Wu, Van Nguyen, Shuo Wang, Alsharif Abuadbba, Carsten Rudolph | Published: 2024-11-18 | Updated: 2025-05-06
LLM Performance Evaluation
Prompt leaking
Promotion of Diversity

SQL Injection Jailbreak: A Structural Disaster of Large Language Models

Authors: Jiawei Zhao, Kejiang Chen, Weiming Zhang, Nenghai Yu | Published: 2024-11-03 | Updated: 2025-05-21
Prompt Injection
Prompt leaking
Attack Type

CTINexus: Automatic Cyber Threat Intelligence Knowledge Graph Construction Using Large Language Models

Authors: Yutong Cheng, Osama Bajaber, Saimon Amanuel Tsegai, Dawn Song, Peng Gao | Published: 2024-10-28 | Updated: 2025-04-21
Cyber Threat Intelligence
Prompt leaking
Watermarking Technology

Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models

Authors: Zi Liang, Haibo Hu, Qingqing Ye, Yaxin Xiao, Haoyang Li | Published: 2024-08-05 | Updated: 2025-02-12
Prompt Injection
Prompt leaking
Model Evaluation

From Sands to Mansions: Towards Automated Cyberattack Emulation with Classical Planning and Large Language Models

Authors: Lingzhi Wang, Zhenyuan Li, Yi Jiang, Zhengkai Wang, Zonghan Guo, Jiahui Wang, Yangyang Wei, Xiangmin Shen, Wei Ruan, Yan Chen | Published: 2024-07-24 | Updated: 2025-04-17
Prompt leaking
Attack Action Model
Attack Detection Method

Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications

Authors: Quan Zhang, Binqi Zeng, Chijin Zhou, Gwihwan Go, Heyuan Shi, Yu Jiang | Published: 2024-04-26
Poisoning attack on RAG
Prompt leaking
Poisoning

Stealing Part of a Production Language Model

Authors: Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Itay Yona, Eric Wallace, David Rolnick, Florian Tramèr | Published: 2024-03-11 | Updated: 2024-07-09
Prompt leaking
Model Robustness
Model Extraction Attack

Secret Collusion among Generative AI Agents: Multi-Agent Deception via Steganography

Authors: Sumeet Ramesh Motwani, Mikhail Baranchuk, Martin Strohmeier, Vijay Bolina, Philip H. S. Torr, Lewis Hammond, Christian Schroeder de Witt | Published: 2024-02-12 | Updated: 2025-04-14
Privacy Enhancing Technology
Prompt leaking
Digital Watermarking for Generative AI