Prompt Injection

What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks

Authors: Nathalie Kirch, Constantin Weisser, Severin Field, Helen Yannakoudakis, Stephen Casper | Published: 2024-11-02 | Updated: 2025-05-14
Disabling LLM Safety Mechanisms
Prompt Injection
Exploratory Attack

Jailbreaking and Mitigation of Vulnerabilities in Large Language Models

Authors: Benji Peng, Keyu Chen, Qian Niu, Ziqian Bi, Ming Liu, Pohsun Feng, Tianyang Wang, Lawrence K. Q. Yan, Yizhu Wen, Yichao Zhang, Caitlyn Heqi Yin | Published: 2024-10-20 | Updated: 2025-05-08
LLM Security
Disabling LLM Safety Mechanisms
Prompt Injection

Denial-of-Service Poisoning Attacks against Large Language Models

Authors: Kuofeng Gao, Tianyu Pang, Chao Du, Yong Yang, Shu-Tao Xia, Min Lin | Published: 2024-10-14
Prompt Injection
Model DoS
Resource Exhaustion

On Calibration of LLM-based Guard Models for Reliable Content Moderation

Authors: Hongfu Liu, Hengguan Huang, Hao Wang, Xiangming Gu, Ye Wang | Published: 2024-10-14
LLM Performance Evaluation
Content Moderation
Prompt Injection

Can LLMs be Scammed? A Baseline Measurement Study

Authors: Udari Madhushani Sehwag, Kelly Patel, Francesca Mosca, Vineeth Ravi, Jessica Staddon | Published: 2024-10-14
LLM Performance Evaluation
Prompt Injection
Evaluation Method

Survival of the Safest: Towards Secure Prompt Optimization through Interleaved Multi-Objective Evolution

Authors: Ankita Sinha, Wendi Cui, Kamalika Das, Jiaxin Zhang | Published: 2024-10-12
Prompt Injection
Multi-Objective Prompt Optimization

Can a large language model be a gaslighter?

Authors: Wei Li, Luyao Zhu, Yang Song, Ruixi Lin, Rui Mao, Yang You | Published: 2024-10-11
Prompt Injection
Safety Alignment
Attack Method

F2A: An Innovative Approach for Prompt Injection by Utilizing Feign Security Detection Agents

Authors: Yupeng Ren | Published: 2024-10-11 | Updated: 2024-10-14
Prompt Injection
Attack Evaluation
Attack Method

PILLAR: an AI-Powered Privacy Threat Modeling Tool

Authors: Majid Mollaeefar, Andrea Bissoli, Silvio Ranise | Published: 2024-10-11
Privacy Protection
Privacy Protection Method
Prompt Injection

APOLLO: A GPT-based tool to detect phishing emails and generate explanations that warn users

Authors: Giuseppe Desolda, Francesco Greco, Luca Viganò | Published: 2024-10-10
Phishing Detection
Prompt Injection
User Education