Prompt leaking

Black-Box Guardrail Reverse-engineering Attack

Authors: Hongwei Yao, Yun Xia, Shuo Shao, Haoran Shi, Tong Qiao, Cong Wang | Published: 2025-11-06
Disabling Safety Mechanisms of LLM
Prompt leaking
Information Security

Whisper Leak: a side-channel attack on Large Language Models

Authors: Geoff McDonald, Jonathan Bar Or | Published: 2025-11-05
Traffic Characteristic Analysis
Prompt leaking
Large Language Model

Fast-MIA: Efficient and Scalable Membership Inference for LLMs

Authors: Hiromu Takahashi, Shotaro Ishihara | Published: 2025-10-27
Privacy Protection Method
Prompt leaking
Computational Efficiency

Is Your Prompt Poisoning Code? Defect Induction Rates and Security Mitigation Strategies

Authors: Bin Wang, YiLu Zhong, MiDi Wan, WenJie Yu, YuanBing Ouyang, Yenan Huang, Hui Li | Published: 2025-10-27
Software Security
Prompt Injection
Prompt leaking

CircuitGuard: Mitigating LLM Memorization in RTL Code Generation Against IP Leakage

Authors: Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, Kimia Azar | Published: 2025-10-22
Privacy-Preserving Machine Learning
Prompt leaking
Causes of Information Leakage

Exploring Membership Inference Vulnerabilities in Clinical Large Language Models

Authors: Alexander Nemecek, Zebin Yun, Zahra Rahmani, Yaniv Harel, Vipin Chaudhary, Mahmood Sharif, Erman Ayday | Published: 2025-10-21
Privacy-Preserving Machine Learning
Prompt leaking
Threats of Medical AI

Prompting the Priorities: A First Look at Evaluating LLMs for Vulnerability Triage and Prioritization

Authors: Osama Al Haddad, Muhammad Ikram, Ejaz Ahmed, Young Lee | Published: 2025-10-21
Prompt Injection
Prompt leaking
Vulnerability Prioritization

RESCUE: Retrieval Augmented Secure Code Generation

Authors: Jiahao Shi, Tianyi Zhang | Published: 2025-10-21
Poisoning attack on RAG
Data-Driven Vulnerability Assessment
Prompt leaking

Lexo: Eliminating Stealthy Supply-Chain Attacks via LLM-Assisted Program Regeneration

Authors: Evangelos Lamprou, Julian Dai, Grigoris Ntousakis, Martin C. Rinard, Nikos Vasilakis | Published: 2025-10-16
Security Analysis
Program Verification
Prompt leaking

Are My Optimized Prompts Compromised? Exploring Vulnerabilities of LLM-based Optimizers

Authors: Andrew Zhao, Reshmi Ghosh, Vitor Carvalho, Emily Lawton, Keegan Hines, Gao Huang, Jack W. Stokes | Published: 2025-10-16
Prompt Injection
Prompt leaking
Large Language Model