Prompt Injection

JULI: Jailbreak Large Language Models by Self-Introspection

Authors: Jesson Wang, Zhanhao Hu, David Wagner | Published: 2025-05-17 | Updated: 2025-05-20
API Security
Disabling LLM Safety Mechanisms
Prompt Injection

Dark LLMs: The Growing Threat of Unaligned AI Models

Authors: Michael Fire, Yitzhak Elbazis, Adi Wasenstein, Lior Rokach | Published: 2025-05-15
Disabling LLM Safety Mechanisms
Prompt Injection
Large Language Model

Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data

Authors: Adel ElZemity, Budi Arief, Shujun Li | Published: 2025-05-15
LLM Security
Prompt Injection
Large Language Model

PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization

Authors: Yidan Wang, Yanan Cao, Yubing Ren, Fang Fang, Zheng Lin, Binxing Fang | Published: 2025-05-15
Disabling LLM Safety Mechanisms
Prompt Injection
Privacy Protection in Machine Learning

SecReEvalBench: A Multi-turned Security Resilience Evaluation Benchmark for Large Language Models

Authors: Huining Cui, Wei Liu | Published: 2025-05-12
LLM Security
Prompt Injection
Prompt Leaking

Security through the Eyes of AI: How Visualization is Shaping Malware Detection

Authors: Asmitha K. A., Matteo Brosolo, Serena Nicolazzo, Antonino Nocera, Vinod P., Rafidha Rehiman K. A., Muhammed Shafi K. P. | Published: 2025-05-12
Prompt Injection
Malware Classification
Adversarial Example Detection

One Trigger Token Is Enough: A Defense Strategy for Balancing Safety and Usability in Large Language Models

Authors: Haoran Gu, Handing Wang, Yi Mei, Mengjie Zhang, Yaochu Jin | Published: 2025-05-12
LLM Security
Disabling LLM Safety Mechanisms
Prompt Injection

Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs

Authors: Chetan Pathade | Published: 2025-05-07 | Updated: 2025-05-13
LLM Security
Disabling LLM Safety Mechanisms
Prompt Injection

Safeguard-by-Development: A Privacy-Enhanced Development Paradigm for Multi-Agent Collaboration Systems

Authors: Jian Cui, Zichuan Li, Luyi Xing, Xiaojing Liao | Published: 2025-05-07 | Updated: 2025-06-24
Privacy Protection
Privacy Protection Framework
Prompt Injection

LlamaFirewall: An open source guardrail system for building secure AI agents

Authors: Sahana Chennabasappa, Cyrus Nikolaidis, Daniel Song, David Molnar, Stephanie Ding, Shengye Wan, Spencer Whitman, Lauren Deason, Nicholas Doucette, Abraham Montilla, Alekhya Gampa, Beto de Paola, Dominik Gabi, James Crnkovich, Jean-Christophe Testud, Kat He, Rashnil Chaturvedi, Wu Zhou, Joshua Saxe | Published: 2025-05-06
LLM Security
Alignment
Prompt Injection