Safeguarding LLMs Against Misuse and AI-Driven Malware Using Steganographic Canaries
Authors: Md Raz, Venkata Sai Charan Putrevu, Meet Udeshi, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri | Published: 2026-03-30
Tags: Data Leakage, Prompt Leaking, Large Language Model
Unveiling the Resilience of LLM-Enhanced Search Engines against Black-Hat SEO Manipulation
Authors: Pei Chen, Geng Hong, Xinyi Wu, Mengying Wu, Zixuan Zhu, Mingxuan Liu, Baojun Liu, Mi Zhang, Min Yang | Published: 2026-03-26
Tags: Prompt Leaking, Model Extraction Attack, Large Language Model
Beyond Content Safety: Real-Time Monitoring for Reasoning Vulnerabilities in Large Language Models
Authors: Xunguang Wang, Yuguang Zhou, Qingyue Wang, Zongjie Li, Ruixuan Huang, Zhenlan Ji, Pingchuan Ma, Shuai Wang | Published: 2026-03-26
Tags: Indirect Prompt Injection, Prompt Leaking, Large Language Model
How Vulnerable Are Edge LLMs?
Authors: Ao Ding, Hongzong Li, Zi Liang, Zhanpeng Shi, Shuxin Zhuang, Shiqin Tang, Rong Feng, Ping Lu | Published: 2026-03-25
Tags: Indirect Prompt Injection, Data Generation, Prompt Leaking
Leveraging Large Language Models for Trustworthiness Assessment of Web Applications
Authors: Oleksandr Yarotskyi, José D'Abruzzo Pereira, João R. Campos | Published: 2026-03-24
Tags: Secure Coding, Prompt Leaking, Evaluation Method
Functional Subspace Watermarking for Large Language Models
Authors: Zikang Ding, Junhao Li, Suling Wu, Junchi Yao, Hongbo Liu, Lijie Hu | Published: 2026-03-19
Tags: Watermarking, Prompt Leaking, Membership Inference
Understanding LLM Behavior When Encountering User-Supplied Harmful Content in Harmless Tasks
Authors: Junjie Chu, Yiting Qu, Ye Leng, Michael Backes, Yun Shen, Savvas Zannettou, Yang Zhang | Published: 2026-03-12
Tags: Prompt Injection, Prompt Leaking, Risk Assessment
CacheSolidarity: Preventing Prefix Caching Side Channels in Multi-tenant LLM Serving Systems
Authors: Panagiotis Georgios Pennas, Konstantinos Papaioannou, Marco Guarnieri, Thaleia Dimitra Doudali | Published: 2026-03-11
Tags: LLM Performance Evaluation, Prompt Injection, Prompt Leaking
Measuring Privacy vs. Fidelity in Synthetic Social Media Datasets
Authors: Henry Tari, Adriana Iamnitchi | Published: 2026-03-04
Tags: LLM Performance Evaluation, Data Privacy Management, Prompt Leaking
Inference-Time Safety For Code LLMs Via Retrieval-Augmented Revision
Authors: Manisha Mukherjee, Vincent J. Hellendoorn | Published: 2026-03-02
Tags: Indirect Prompt Injection, Methods Leveraging Security-Related Knowledge, Prompt Leaking