CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion | Authors: Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Wai Lam, Lizhuang Ma | Published: 2024-03-12 | Updated: 2024-09-14 | Tags: LLM Security, Code Generation, Prompt Injection
Fuzzing BusyBox: Leveraging LLM and Crash Reuse for Embedded Bug Unearthing | Authors: Asmita, Yaroslav Oliinyk, Michael Scott, Ryan Tsang, Chongzhou Fang, Houman Homayoun | Published: 2024-03-06 | Tags: LLM Security, Fuzzing, Initial Seed Generation
AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks | Authors: Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li | Published: 2024-03-02 | Tags: LLM Security, Prompt Injection, Attack Method
Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction | Authors: Tong Liu, Yingjie Zhang, Zhe Zhao, Yinpeng Dong, Guozhu Meng, Kai Chen | Published: 2024-02-28 | Updated: 2024-06-10 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
LLMs Can Defend Themselves Against Jailbreaking in a Practical Manner: A Vision Paper | Authors: Daoyuan Wu, Shuai Wang, Yang Liu, Ning Liu | Published: 2024-02-24 | Updated: 2024-03-04 | Tags: LLM Security, Prompt Injection, Prompt Engineering
On Trojan Signatures in Large Language Models of Code | Authors: Aftab Hussain, Md Rafiqul Islam Rabin, Mohammad Amin Alipour | Published: 2024-02-23 | Updated: 2024-03-07 | Tags: LLM Security, Trojan Horse Signature, Trojan Detection
Coercing LLMs to do and reveal (almost) anything | Authors: Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein | Published: 2024-02-21 | Tags: LLM Security, Prompt Injection, Attack Method
A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models | Authors: Zihao Xu, Yi Liu, Gelei Deng, Yuekang Li, Stjepan Picek | Published: 2024-02-21 | Updated: 2024-05-17 | Tags: LLM Security, Prompt Injection, Defense Method
The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative | Authors: Zhen Tan, Chengshuai Zhao, Raha Moraffah, Yifan Li, Yu Kong, Tianlong Chen, Huan Liu | Published: 2024-02-20 | Updated: 2024-06-03 | Tags: LLM Security, Classification of Malicious Actors, Attack Method
TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification | Authors: Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh | Published: 2024-02-20 | Updated: 2024-06-06 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection