RoguePrompt: Dual-Layer Ciphering for Self-Reconstruction to Circumvent LLM Moderation. Authors: Benyamin Tafreshian | Published: 2025-11-24 | Tags: Indirect Prompt Injection, Prompt Leaking, Malicious Prompt
Q-MLLM: Vector Quantization for Robust Multimodal Large Language Model Security. Authors: Wei Zhao, Zhe Li, Yige Li, Jun Sun | Published: 2025-11-20 | Tags: Prompt Leaking, Robustness Improvement Method, Digital Watermarking for Generative AI
PSM: Prompt Sensitivity Minimization via LLM-Guided Black-Box Optimization. Authors: Hussein Jawad, Nicolas Brunel | Published: 2025-11-20 | Tags: Privacy-Preserving Data Mining, Prompt Leaking, Malicious Prompt
Taxonomy, Evaluation and Exploitation of IPI-Centric LLM Agent Defense Frameworks. Authors: Zimo Ji, Xunguang Wang, Zongjie Li, Pingchuan Ma, Yudong Gao, Daoyuan Wu, Xincheng Yan, Tian Tian, Shuai Wang | Published: 2025-11-19 | Tags: Indirect Prompt Injection, Prompt Leaking, Adaptive Misuse Detection
TZ-LLM: Protecting On-Device Large Language Models with Arm TrustZone. Authors: Xunjie Wang, Jiacheng Shi, Zihan Zhao, Yang Yu, Zhichao Hua, Jinyu Gu | Published: 2025-11-17 | Tags: Prompt Leaking, Model DoS, Performance Evaluation Metrics
Black-Box Guardrail Reverse-engineering Attack. Authors: Hongwei Yao, Yun Xia, Shuo Shao, Haoran Shi, Tong Qiao, Cong Wang | Published: 2025-11-06 | Tags: Disabling Safety Mechanisms of LLM, Prompt Leaking, Information Security
Whisper Leak: a side-channel attack on Large Language Models. Authors: Geoff McDonald, Jonathan Bar Or | Published: 2025-11-05 | Tags: Traffic Characteristic Analysis, Prompt Leaking, Large Language Model
Fast-MIA: Efficient and Scalable Membership Inference for LLMs. Authors: Hiromu Takahashi, Shotaro Ishihara | Published: 2025-10-27 | Tags: Privacy Protection Method, Prompt Leaking, Computational Efficiency
Is Your Prompt Poisoning Code? Defect Induction Rates and Security Mitigation Strategies. Authors: Bin Wang, YiLu Zhong, MiDi Wan, WenJie Yu, YuanBing Ouyang, Yenan Huang, Hui Li | Published: 2025-10-27 | Tags: Software Security, Prompt Injection, Prompt Leaking
CircuitGuard: Mitigating LLM Memorization in RTL Code Generation Against IP Leakage. Authors: Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, Kimia Azar | Published: 2025-10-22 | Tags: Privacy-Preserving Machine Learning, Prompt Leaking, Causes of Information Leakage