ChatNVD: Advancing Cybersecurity Vulnerability Assessment with Large Language Models
Authors: Shivansh Chopra, Hussain Ahmad, Diksha Goel, Claudia Szabo | Published: 2024-12-06 | Updated: 2025-05-20
Tags: Text Generation Method, Prompt Injection, Computational Efficiency
VLSBench: Unveiling Visual Leakage in Multimodal Safety
Authors: Xuhao Hu, Dongrui Liu, Hao Li, Xuanjing Huang, Jing Shao | Published: 2024-11-29 | Updated: 2025-01-17
Tags: Prompt Injection, Safety Alignment
Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment
Authors: Soumya Suvra Ghosal, Souradip Chakraborty, Vaibhav Singh, Tianrui Guan, Mengdi Wang, Ahmad Beirami, Furong Huang, Alvaro Velasquez, Dinesh Manocha, Amrit Singh Bedi | Published: 2024-11-27 | Updated: 2025-03-20
Tags: Prompt Injection, Safety Alignment, Adversarial Attack
“Moralized” Multi-Step Jailbreak Prompts: Black-Box Testing of Guardrails in Large Language Models for Verbal Attacks
Authors: Libo Wang | Published: 2024-11-23 | Updated: 2025-03-20
Tags: Prompt Injection, Large Language Model
JailbreakLens: Interpreting Jailbreak Mechanism in the Lens of Representation and Circuit
Authors: Zeqing He, Zhibo Wang, Zhixuan Chu, Huiyu Xu, Wenhui Zhang, Qinglong Wang, Rui Zheng | Published: 2024-11-17 | Updated: 2025-04-24
Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Large Language Model
MRJ-Agent: An Effective Jailbreak Agent for Multi-Round Dialogue
Authors: Fengxiang Wang, Ranjie Duan, Peng Xiao, Xiaojun Jia, Shiji Zhao, Cheng Wei, YueFeng Chen, Chongwen Wang, Jialing Tao, Hang Su, Jun Zhu, Hui Xue | Published: 2024-11-06 | Updated: 2025-01-07
Tags: Prompt Injection, Multi-Round Dialogue
SQL Injection Jailbreak: A Structural Disaster of Large Language Models
Authors: Jiawei Zhao, Kejiang Chen, Weiming Zhang, Nenghai Yu | Published: 2024-11-03 | Updated: 2025-05-21
Tags: Prompt Injection, Prompt Leaking, Attack Type
What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks
Authors: Nathalie Kirch, Constantin Weisser, Severin Field, Helen Yannakoudakis, Stephen Casper | Published: 2024-11-02 | Updated: 2025-05-14
Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Exploratory Attack
Jailbreaking and Mitigation of Vulnerabilities in Large Language Models
Authors: Benji Peng, Keyu Chen, Qian Niu, Ziqian Bi, Ming Liu, Pohsun Feng, Tianyang Wang, Lawrence K. Q. Yan, Yizhu Wen, Yichao Zhang, Caitlyn Heqi Yin | Published: 2024-10-20 | Updated: 2025-05-08
Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
Denial-of-Service Poisoning Attacks against Large Language Models
Authors: Kuofeng Gao, Tianyu Pang, Chao Du, Yong Yang, Shu-Tao Xia, Min Lin | Published: 2024-10-14
Tags: Prompt Injection, Model DoS, Resource Scarcity Issues