Literature Database (2025.05.27)

PRSA: PRompt Stealing Attacks against Large Language Models | Authors: Yong Yang, Changjiang Li, Yi Jiang, Xi Chen, Haoyu Wang, Xuhong Zhang, Zonghui Wang, Shouling Ji | Published: 2024-02-29 | Updated: 2024-06-08 | Tags: LLM Performance Evaluation, Prompt Injection, Prompt Engineering
Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction | Authors: Tong Liu, Yingjie Zhang, Zhe Zhao, Yinpeng Dong, Guozhu Meng, Kai Chen | Published: 2024-02-28 | Updated: 2024-06-10 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification | Authors: Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh | Published: 2024-02-20 | Updated: 2024-06-06 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
An Empirical Evaluation of LLMs for Solving Offensive Security Challenges | Authors: Minghao Shao, Boyuan Chen, Sofija Jancheska, Brendan Dolan-Gavitt, Siddharth Garg, Ramesh Karri, Muhammad Shafique | Published: 2024-02-19 | Tags: LLM Performance Evaluation, Prompt Injection, Educational CTF
CyberMetric: A Benchmark Dataset based on Retrieval-Augmented Generation for Evaluating LLMs in Cybersecurity Knowledge | Authors: Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Tamas Bisztray, Merouane Debbah | Published: 2024-02-12 | Updated: 2024-06-03 | Tags: LLM Performance Evaluation, Cybersecurity, Dataset Generation
Differentially Private Training of Mixture of Experts Models | Authors: Pierre Tholoniat, Huseyin A. Inan, Janardhan Kulkarni, Robert Sim | Published: 2024-02-11 | Tags: LLM Performance Evaluation, Privacy Protection Method, Model Performance Evaluation
In-Context Learning Can Re-learn Forbidden Tasks | Authors: Sophie Xhonneux, David Dobre, Jian Tang, Gauthier Gidel, Dhanya Sridhar | Published: 2024-02-08 | Tags: Few-Shot Learning, LLM Security, LLM Performance Evaluation
Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia | Authors: Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang | Published: 2024-02-08 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models | Authors: Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, Jing Shao | Published: 2024-02-07 | Updated: 2024-06-07 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
Ocassionally Secure: A Comparative Analysis of Code Generation Assistants | Authors: Ran Elgedawy, John Sadik, Senjuti Dutta, Anuj Gautam, Konstantinos Georgiou, Farzin Gholamrezae, Fujiao Ji, Kyungchan Lim, Qian Liu, Scott Ruoti | Published: 2024-02-01 | Tags: LLM Performance Evaluation, Code Generation, Prompt Injection