Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks Authors: Chen Xiong, Xiangyu Qi, Pin-Yu Chen, Tsung-Yi Ho | Published: 2024-05-30 | Updated: 2025-06-04 | Tags: DPP Set Generation, Prompt Injection, Attack Method
Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems Authors: Ruochen Jiao, Shaoyuan Xie, Justin Yue, Takami Sato, Lixu Wang, Yixuan Wang, Qi Alfred Chen, Qi Zhu | Published: 2024-05-27 | Updated: 2025-04-30 | Tags: LLM Security, Backdoor Attack, Prompt Injection
Medical MLLM is Vulnerable: Cross-Modality Jailbreak and Mismatched Attacks on Medical Multimodal Large Language Models Authors: Xijie Huang, Xinyuan Wang, Hantao Zhang, Yinghao Zhu, Jiawen Xi, Jingkun An, Hao Wang, Hao Liang, Chengwei Pan | Published: 2024-05-26 | Updated: 2024-08-21 | Tags: Prompt Injection, Threats of Medical AI, Attack Method
Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character Authors: Siyuan Ma, Weidi Luo, Yu Wang, Xiaogeng Liu | Published: 2024-05-25 | Updated: 2024-06-12 | Tags: LLM Security, Prompt Injection, Attack Method
Harnessing Large Language Models for Software Vulnerability Detection: A Comprehensive Benchmarking Study Authors: Karl Tamberg, Hayretdin Bahsi | Published: 2024-05-24 | Tags: LLM Performance Evaluation, Prompt Injection, Vulnerability Management
ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users Authors: Guanlin Li, Kangjie Chen, Shudong Zhang, Jie Zhang, Tianwei Zhang | Published: 2024-05-24 | Updated: 2024-10-11 | Tags: Content Moderation, Prompt Injection, Compliance with Ethical Guidelines
Cross-Task Defense: Instruction-Tuning LLMs for Content Safety Authors: Yu Fu, Wen Xiao, Jia Chen, Jiachen Li, Evangelos Papalexakis, Aichi Chien, Yue Dong | Published: 2024-05-24 | Tags: Content Moderation, Prompt Injection, Defense Method
A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions Authors: Mohammed Hassanin, Nour Moustafa | Published: 2024-05-23 | Tags: LLM Security, Cybersecurity, Prompt Injection
Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities Authors: Mohamed Amine Ferrag, Fatima Alwahedi, Ammar Battah, Bilel Cherif, Abdechakour Mechri, Norbert Tihanyi, Tamas Bisztray, Merouane Debbah | Published: 2024-05-21 | Updated: 2025-01-17 | Tags: LLM Performance Evaluation, Cybersecurity, Prompt Injection
Self-HWDebug: Automation of LLM Self-Instructing for Hardware Security Verification Authors: Mohammad Akyash, Hadi Mardani Kamali | Published: 2024-05-20 | Tags: Security Analysis, Prompt Injection, Vulnerability Management