Towards Explainable Network Intrusion Detection using Large Language Models
Authors: Paul R. B. Houssel, Priyanka Singh, Siamak Layeghy, Marius Portmann | Published: 2024-08-08
Tags: LLM Performance Evaluation, Network Threat Detection, Prompt Injection
EnJa: Ensemble Jailbreak on Large Language Models
Authors: Jiahao Zhang, Zilong Wang, Ruofan Wang, Xingjun Ma, Yu-Gang Jiang | Published: 2024-08-07
Tags: Prompt Injection, Attack Method, Evaluation Method
Compromising Embodied Agents with Contextual Backdoor Attacks
Authors: Aishan Liu, Yuguang Zhou, Xianglong Liu, Tianyuan Zhang, Siyuan Liang, Jiakai Wang, Yanjun Pu, Tianlin Li, Junqi Zhang, Wenbo Zhou, Qing Guo, Dacheng Tao | Published: 2024-08-06
Tags: Backdoor Attack, Prompt Injection
Hide and Seek: Fingerprinting Large Language Models with Evolutionary Learning
Authors: Dmitri Iourovitski, Sanat Sharma, Rakshak Talwar | Published: 2024-08-06
Tags: LLM Performance Evaluation, Prompt Injection, Model Performance Evaluation
Can Reinforcement Learning Unlock the Hidden Dangers in Aligned Large Language Models?
Authors: Mohammad Bahrami Karkevandi, Nishant Vishwamitra, Peyman Najafirad | Published: 2024-08-05
Tags: Prompt Injection, Reinforcement Learning, Adversarial Example
Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models
Authors: Zi Liang, Haibo Hu, Qingqing Ye, Yaxin Xiao, Haoyang Li | Published: 2024-08-05 | Updated: 2025-02-12
Tags: Prompt Injection, Prompt Leaking, Model Evaluation
Automated Phishing Detection Using URLs and Webpages
Authors: Huilin Wang, Bryan Hooi | Published: 2024-08-03 | Updated: 2024-08-16
Tags: Phishing Detection, Brand Recognition Problem, Prompt Injection
MCGMark: An Encodable and Robust Online Watermark for Tracing LLM-Generated Malicious Code
Authors: Kaiwen Ning, Jiachi Chen, Qingyuan Zhong, Tao Zhang, Yanlin Wang, Wei Li, Jingwen Zhang, Jianxing Yu, Yuming Feng, Weizhe Zhang, Zibin Zheng | Published: 2024-08-02 | Updated: 2025-04-21
Tags: Code Generation, Prompt Injection, Watermark Robustness
Jailbreaking Text-to-Image Models with LLM-Based Agents
Authors: Yingkai Dong, Zheng Li, Xiangtao Meng, Ning Yu, Shanqing Guo | Published: 2024-08-01 | Updated: 2024-09-09
Tags: LLM Security, Prompt Injection, Model Performance Evaluation
A Qualitative Study on Using ChatGPT for Software Security: Perception vs. Practicality
Authors: M. Mehdi Kholoosi, M. Ali Babar, Roland Croft | Published: 2024-08-01
Tags: Security Analysis, Prompt Injection, Vulnerability Management