A Qualitative Study on Using ChatGPT for Software Security: Perception vs. Practicality Authors: M. Mehdi Kholoosi, M. Ali Babar, Roland Croft | Published: 2024-08-01 | Tags: Security Analysis, Prompt Injection, Vulnerability Management
From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks Authors: Aditya Kulkarni, Vivek Balachandran, Dinil Mon Divakaran, Tamal Das | Published: 2024-07-29 | Updated: 2025-03-15 | Tags: Dataset Generation, Phishing Detection, Prompt Injection
Private prediction for large-scale synthetic text generation Authors: Kareem Amin, Alex Bie, Weiwei Kong, Alexey Kurakin, Natalia Ponomareva, Umar Syed, Andreas Terzis, Sergei Vassilvitskii | Published: 2024-07-16 | Updated: 2024-10-09 | Tags: Watermarking, Privacy Protection Method, Prompt Injection
TPIA: Towards Target-specific Prompt Injection Attack against Code-oriented Large Language Models Authors: Yuchen Yang, Hongwei Yao, Bingrun Yang, Yiling He, Yiming Li, Tianwei Zhang, Zhan Qin, Kui Ren, Chun Chen | Published: 2024-07-12 | Updated: 2025-01-16 | Tags: LLM Security, Prompt Injection, Attack Method
Refusing Safe Prompts for Multi-modal Large Language Models Authors: Zedian Shao, Hongbin Liu, Yuepeng Hu, Neil Zhenqiang Gong | Published: 2024-07-12 | Updated: 2024-09-05 | Tags: LLM Security, Prompt Injection, Evaluation Method
From Theft to Bomb-Making: The Ripple Effect of Unlearning in Defending Against Jailbreak Attacks Authors: Zhexin Zhang, Junxiao Yang, Yida Lu, Pei Ke, Shiyao Cui, Chujie Zheng, Hongning Wang, Minlie Huang | Published: 2024-07-03 | Updated: 2025-05-20 | Tags: Prompt Injection, Large Language Model, Law Enforcement Evasion
On Discrete Prompt Optimization for Diffusion Models Authors: Ruochen Wang, Ting Liu, Cho-Jui Hsieh, Boqing Gong | Published: 2024-06-27 | Tags: Watermarking, Prompt Injection, Prompt Engineering
CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models Authors: Yuetai Li, Zhangchen Xu, Fengqing Jiang, Luyao Niu, Dinuka Sahabandu, Bhaskar Ramasubramanian, Radha Poovendran | Published: 2024-06-18 | Updated: 2025-03-27 | Tags: LLM Security, Backdoor Attack, Prompt Injection
ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates Authors: Fengqing Jiang, Zhangchen Xu, Luyao Niu, Bill Yuchen Lin, Radha Poovendran | Published: 2024-06-17 | Updated: 2025-01-07 | Tags: LLM Security, Prompt Injection, Vulnerability Management
GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory Authors: Wei Fan, Haoran Li, Zheye Deng, Weiqi Wang, Yangqiu Song | Published: 2024-06-17 | Updated: 2024-10-04 | Tags: LLM Performance Evaluation, Privacy Protection Method, Prompt Injection