LLM Security

Securing RAG: A Risk Assessment and Mitigation Framework

Authors: Lukas Ammann, Sara Ott, Christoph R. Landolt, Marco P. Lehmann | Published: 2025-05-13
LLM Security
RAG
Poisoning Attack on RAG

SecReEvalBench: A Multi-turned Security Resilience Evaluation Benchmark for Large Language Models

Authors: Huining Cui, Wei Liu | Published: 2025-05-12
LLM Security
Prompt Injection
Prompt Leaking

Private LoRA Fine-tuning of Open-Source LLMs with Homomorphic Encryption

Authors: Jordan Frery, Roman Bredehoft, Jakub Klemsa, Arthur Meyre, Andrei Stoian | Published: 2025-05-12
LLM Security
Cryptography
Machine Learning Technology

One Trigger Token Is Enough: A Defense Strategy for Balancing Safety and Usability in Large Language Models

Authors: Haoran Gu, Handing Wang, Yi Mei, Mengjie Zhang, Yaochu Jin | Published: 2025-05-12
LLM Security
Disabling Safety Mechanisms of LLMs
Prompt Injection

LLM-Text Watermarking based on Lagrange Interpolation

Authors: Jarosław Janas, Paweł Morawiecki, Josef Pieprzyk | Published: 2025-05-09 | Updated: 2025-05-12
LLM Security
Prompt Leaking
Digital Watermarking for Generative AI

Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs

Authors: Chetan Pathade | Published: 2025-05-07 | Updated: 2025-05-13
LLM Security
Disabling Safety Mechanisms of LLMs
Prompt Injection

Weaponizing Language Models for Cybersecurity Offensive Operations: Automating Vulnerability Assessment Report Validation; A Review Paper

Authors: Abdulrahman S Almuhaidib, Azlan Mohd Zain, Zalmiyah Zakaria, Izyan Izzati Kamsani, Abdulaziz S Almuhaidib | Published: 2025-05-07
LLM Security
Vulnerability Analysis

LLMs’ Suitability for Network Security: A Case Study of STRIDE Threat Modeling

Authors: AbdulAziz AbdulGhaffar, Ashraf Matrawy | Published: 2025-05-07
LLM Security
Performance Evaluation
Vulnerability Analysis

LlamaFirewall: An open source guardrail system for building secure AI agents

Authors: Sahana Chennabasappa, Cyrus Nikolaidis, Daniel Song, David Molnar, Stephanie Ding, Shengye Wan, Spencer Whitman, Lauren Deason, Nicholas Doucette, Abraham Montilla, Alekhya Gampa, Beto de Paola, Dominik Gabi, James Crnkovich, Jean-Christophe Testud, Kat He, Rashnil Chaturvedi, Wu Zhou, Joshua Saxe | Published: 2025-05-06
LLM Security
Alignment
Prompt Injection

Output Constraints as Attack Surface: Exploiting Structured Generation to Bypass LLM Safety Mechanisms

Authors: Shuoming Zhang, Jiacheng Zhao, Ruiyuan Xu, Xiaobing Feng, Huimin Cui | Published: 2025-03-31
LLM Security
Disabling Safety Mechanisms of LLMs
Prompt Injection