Prompt Injection

Security through the Eyes of AI: How Visualization is Shaping Malware Detection

Authors: Asmitha K. A., Matteo Brosolo, Serena Nicolazzo, Antonino Nocera, Vinod P., Rafidha Rehiman K. A., Muhammed Shafi K. P | Published: 2025-05-12
Prompt Injection
Malware Classification
Adversarial Example Detection

One Trigger Token Is Enough: A Defense Strategy for Balancing Safety and Usability in Large Language Models

Authors: Haoran Gu, Handing Wang, Yi Mei, Mengjie Zhang, Yaochu Jin | Published: 2025-05-12
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection

Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs

Authors: Chetan Pathade | Published: 2025-05-07 | Updated: 2025-05-13
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection

LlamaFirewall: An open source guardrail system for building secure AI agents

Authors: Sahana Chennabasappa, Cyrus Nikolaidis, Daniel Song, David Molnar, Stephanie Ding, Shengye Wan, Spencer Whitman, Lauren Deason, Nicholas Doucette, Abraham Montilla, Alekhya Gampa, Beto de Paola, Dominik Gabi, James Crnkovich, Jean-Christophe Testud, Kat He, Rashnil Chaturvedi, Wu Zhou, Joshua Saxe | Published: 2025-05-06
LLM Security
Alignment
Prompt Injection

Directed Greybox Fuzzing via Large Language Model

Authors: Hanxiang Xu, Yanjie Zhao, Haoyu Wang | Published: 2025-05-06
RAG
Prompt Injection
Vulnerability Analysis

LLM-Based Threat Detection and Prevention Framework for IoT Ecosystems

Authors: Yazan Otoum, Arghavan Asad, Amiya Nayak | Published: 2025-05-01
Bias Detection in AI Output
LLM Performance Evaluation
Prompt Injection

An Empirical Study on the Effectiveness of Large Language Models for Binary Code Understanding

Authors: Xiuwei Shang, Zhenkan Fu, Shaoyin Cheng, Guoqiang Chen, Gangyang Li, Li Hu, Weiming Zhang, Nenghai Yu | Published: 2025-04-30
Program Analysis
Prompt Injection
Prompt Leaking

LASHED: LLMs And Static Hardware Analysis for Early Detection of RTL Bugs

Authors: Baleegh Ahmad, Hammond Pearce, Ramesh Karri, Benjamin Tan | Published: 2025-04-30
Program Analysis
Prompt Injection
Vulnerability Detection

XBreaking: Explainable Artificial Intelligence for Jailbreaking LLMs

Authors: Marco Arazzi, Vignesh Kumar Kembu, Antonino Nocera, Vinod P | Published: 2025-04-30
Disabling Safety Mechanisms of LLM
Prompt Injection
Explanation Method

ACE: A Security Architecture for LLM-Integrated App Systems

Authors: Evan Li, Tushin Mallick, Evan Rose, William Robertson, Alina Oprea, Cristina Nita-Rotaru | Published: 2025-04-29 | Updated: 2025-05-07
Indirect Prompt Injection
Prompt Injection
Information Flow Analysis