Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models | Authors: Zi Liang, Haibo Hu, Qingqing Ye, Yaxin Xiao, Haoyang Li | Published: 2024-08-05 | Updated: 2025-02-12 | Tags: Prompt Injection, Prompt Leaking, Model Evaluation
Automated Phishing Detection Using URLs and Webpages | Authors: Huilin Wang, Bryan Hooi | Published: 2024-08-03 | Updated: 2024-08-16 | Tags: Phishing Detection, Brand Recognition Problem, Prompt Injection
MCGMark: An Encodable and Robust Online Watermark for Tracing LLM-Generated Malicious Code | Authors: Kaiwen Ning, Jiachi Chen, Qingyuan Zhong, Tao Zhang, Yanlin Wang, Wei Li, Jingwen Zhang, Jianxing Yu, Yuming Feng, Weizhe Zhang, Zibin Zheng | Published: 2024-08-02 | Updated: 2025-04-21 | Tags: Code Generation, Prompt Injection, Watermark Robustness
Jailbreaking Text-to-Image Models with LLM-Based Agents | Authors: Yingkai Dong, Zheng Li, Xiangtao Meng, Ning Yu, Shanqing Guo | Published: 2024-08-01 | Updated: 2024-09-09 | Tags: LLM Security, Prompt Injection, Model Performance Evaluation
A Qualitative Study on Using ChatGPT for Software Security: Perception vs. Practicality | Authors: M. Mehdi Kholoosi, M. Ali Babar, Roland Croft | Published: 2024-08-01 | Tags: Security Analysis, Prompt Injection, Vulnerability Management
From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks | Authors: Aditya Kulkarni, Vivek Balachandran, Dinil Mon Divakaran, Tamal Das | Published: 2024-07-29 | Updated: 2025-03-15 | Tags: Dataset Generation, Phishing Detection, Prompt Injection
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs) | Authors: Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John Doucette, NhatHai Phan | Published: 2024-07-20 | Updated: 2025-07-10 | Tags: Prompt Injection, Prompt Validation, Adversarial Attack
Private Prediction for Large-Scale Synthetic Text Generation | Authors: Kareem Amin, Alex Bie, Weiwei Kong, Alexey Kurakin, Natalia Ponomareva, Umar Syed, Andreas Terzis, Sergei Vassilvitskii | Published: 2024-07-16 | Updated: 2024-10-09 | Tags: Watermarking, Privacy Protection Method, Prompt Injection
Hey, That’s My Model! Introducing Chain &amp; Hash, An LLM Fingerprinting Technique | Authors: Mark Russinovich, Ahmed Salem | Published: 2024-07-15 | Updated: 2025-06-12 | Tags: Indirect Prompt Injection, Fingerprinting Method, Prompt Injection
TPIA: Towards Target-specific Prompt Injection Attack against Code-oriented Large Language Models | Authors: Yuchen Yang, Hongwei Yao, Bingrun Yang, Yiling He, Yiming Li, Tianwei Zhang, Zhan Qin, Kui Ren, Chun Chen | Published: 2024-07-12 | Updated: 2025-01-16 | Tags: LLM Security, Prompt Injection, Attack Method