LLM Security

Web IP at Risk: Prevent Unauthorized Real-Time Retrieval by Large Language Models

Authors: Yisheng Zhong, Yizhu Wen, Junfeng Guo, Mehran Kafai, Heng Huang, Hanqing Guo, Zhuangdi Zhu | Published: 2025-05-19
LLM Security
Indirect Prompt Injection
Privacy Management

S3C2 Summit 2024-09: Industry Secure Software Supply Chain Summit

Authors: Imranur Rahman, Yasemin Acar, Michel Cukier, William Enck, Christian Kastner, Alexandros Kapravelos, Dominik Wermke, Laurie Williams | Published: 2025-05-15
LLM Security
Software Supply Chain Security
Balance Between Education and Automation

AutoPentest: Enhancing Vulnerability Management With Autonomous LLM Agents

Authors: Julius Henke | Published: 2025-05-15
LLM Security
RAG
Indirect Prompt Injection

Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data

Authors: Adel ElZemity, Budi Arief, Shujun Li | Published: 2025-05-15
LLM Security
Prompt Injection
Large Language Model

Securing RAG: A Risk Assessment and Mitigation Framework

Authors: Lukas Ammann, Sara Ott, Christoph R. Landolt, Marco P. Lehmann | Published: 2025-05-13 | Updated: 2025-05-21
LLM Security
RAG
Poisoning Attack on RAG

SecReEvalBench: A Multi-turned Security Resilience Evaluation Benchmark for Large Language Models

Authors: Huining Cui, Wei Liu | Published: 2025-05-12
LLM Security
Prompt Injection
Prompt Leaking

Private LoRA Fine-tuning of Open-Source LLMs with Homomorphic Encryption

Authors: Jordan Frery, Roman Bredehoft, Jakub Klemsa, Arthur Meyre, Andrei Stoian | Published: 2025-05-12
LLM Security
Cryptography
Machine Learning Technology

One Trigger Token Is Enough: A Defense Strategy for Balancing Safety and Usability in Large Language Models

Authors: Haoran Gu, Handing Wang, Yi Mei, Mengjie Zhang, Yaochu Jin | Published: 2025-05-12
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection

LLM-Text Watermarking based on Lagrange Interpolation

Authors: Jarosław Janas, Paweł Morawiecki, Josef Pieprzyk | Published: 2025-05-09 | Updated: 2025-05-13
LLM Security
Prompt Leaking
Digital Watermarking for Generative AI

Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs

Authors: Chetan Pathade | Published: 2025-05-07 | Updated: 2025-05-13
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection