Prompt Injection

Exploring Vulnerabilities and Protections in Large Language Models: A Survey

Authors: Frank Weizhen Liu, Chenhui Hu | Published: 2024-06-01
LLM Security
Prompt Injection
Defense Method

Improved Techniques for Optimization-Based Jailbreaking on Large Language Models

Authors: Xiaojun Jia, Tianyu Pang, Chao Du, Yihao Huang, Jindong Gu, Yang Liu, Xiaochun Cao, Min Lin | Published: 2024-05-31 | Updated: 2024-06-05
LLM Security
Watermarking
Prompt Injection

Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks

Authors: Chen Xiong, Xiangyu Qi, Pin-Yu Chen, Tsung-Yi Ho | Published: 2024-05-30 | Updated: 2025-06-04
DPP Set Generation
Prompt Injection
Attack Method

Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems

Authors: Ruochen Jiao, Shaoyuan Xie, Justin Yue, Takami Sato, Lixu Wang, Yixuan Wang, Qi Alfred Chen, Qi Zhu | Published: 2024-05-27 | Updated: 2025-04-30
LLM Security
Backdoor Attack
Prompt Injection

Medical MLLM is Vulnerable: Cross-Modality Jailbreak and Mismatched Attacks on Medical Multimodal Large Language Models

Authors: Xijie Huang, Xinyuan Wang, Hantao Zhang, Yinghao Zhu, Jiawen Xi, Jingkun An, Hao Wang, Hao Liang, Chengwei Pan | Published: 2024-05-26 | Updated: 2024-08-21
Prompt Injection
Threats of Medical AI
Attack Method

Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character

Authors: Siyuan Ma, Weidi Luo, Yu Wang, Xiaogeng Liu | Published: 2024-05-25 | Updated: 2024-06-12
LLM Security
Prompt Injection
Attack Method

Harnessing Large Language Models for Software Vulnerability Detection: A Comprehensive Benchmarking Study

Authors: Karl Tamberg, Hayretdin Bahsi | Published: 2024-05-24
LLM Performance Evaluation
Prompt Injection
Vulnerability Management

ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users

Authors: Guanlin Li, Kangjie Chen, Shudong Zhang, Jie Zhang, Tianwei Zhang | Published: 2024-05-24 | Updated: 2024-10-11
Content Moderation
Prompt Injection
Compliance with Ethical Guidelines

Cross-Task Defense: Instruction-Tuning LLMs for Content Safety

Authors: Yu Fu, Wen Xiao, Jia Chen, Jiachen Li, Evangelos Papalexakis, Aichi Chien, Yue Dong | Published: 2024-05-24
Content Moderation
Prompt Injection
Defense Method

A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions

Authors: Mohammed Hassanin, Nour Moustafa | Published: 2024-05-23
LLM Security
Cybersecurity
Prompt Injection