CTINexus: Automatic Cyber Threat Intelligence Knowledge Graph Construction Using Large Language Models Authors: Yutong Cheng, Osama Bajaber, Saimon Amanuel Tsegai, Dawn Song, Peng Gao | Published: 2024-10-28 | Updated: 2025-04-21 | Tags: Cyber Threat Intelligence, Prompt leaking, Watermarking Technology
Reconstruction of Differentially Private Text Sanitization via Large Language Models Authors: Shuchao Pang, Zhigang Lu, Haichen Wang, Peng Fu, Yongbin Zhou, Minhui Xue | Published: 2024-10-16 | Updated: 2025-09-18 | Tags: Privacy Analysis, Prompt Injection, Prompt leaking
The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems Authors: Linke Song, Zixuan Pang, Wenhao Wang, Zihao Wang, XiaoFeng Wang, Hongbo Chen, Wei Song, Yier Jin, Dan Meng, Rui Hou | Published: 2024-09-30 | Updated: 2025-08-13 | Tags: Security Assurance, Prompt leaking, Attack Strategy Analysis
Confidential Prompting: Privacy-preserving LLM Inference on Cloud Authors: Caihua Li, In Gim, Lin Zhong | Published: 2024-09-27 | Updated: 2025-08-25 | Tags: Process Partitioning Method, Prompt leaking, Model Extraction Attack
Evading Toxicity Detection with ASCII-art: A Benchmark of Spatial Attacks on Moderation Systems Authors: Sergey Berezin, Reza Farahbakhsh, Noel Crespi | Published: 2024-09-27 | Updated: 2025-09-24 | Tags: Token Compression Framework, Prompt leaking, Natural Language Processing
Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models Authors: Zi Liang, Haibo Hu, Qingqing Ye, Yaxin Xiao, Haoyang Li | Published: 2024-08-05 | Updated: 2025-02-12 | Tags: Prompt Injection, Prompt leaking, Model Evaluation
From Sands to Mansions: Towards Automated Cyberattack Emulation with Classical Planning and Large Language Models Authors: Lingzhi Wang, Zhenyuan Li, Yi Jiang, Zhengkai Wang, Zonghan Guo, Jiahui Wang, Yangyang Wei, Xiangmin Shen, Wei Ruan, Yan Chen | Published: 2024-07-24 | Updated: 2025-04-17 | Tags: Prompt leaking, Attack Action Model, Attack Detection Method
ProxyGPT: Enabling User Anonymity in LLM Chatbots via (Un)Trustworthy Volunteer Proxies Authors: Dzung Pham, Jade Sheffey, Chau Minh Pham, Amir Houmansadr | Published: 2024-07-11 | Updated: 2025-06-11 | Tags: Privacy Enhancing Technology, Prompt Injection, Prompt leaking
Chain-of-Scrutiny: Detecting Backdoor Attacks for Large Language Models Authors: Xi Li, Ruofan Mao, Yusen Zhang, Renze Lou, Chen Wu, Jiaqi Wang | Published: 2024-06-10 | Updated: 2025-10-30 | Tags: Indirect Prompt Injection, Trigger Detection, Prompt leaking
Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications Authors: Quan Zhang, Binqi Zeng, Chijin Zhou, Gwihwan Go, Heyuan Shi, Yu Jiang | Published: 2024-04-26 | Tags: Poisoning attack on RAG, Prompt leaking, Poisoning