Web IP at Risk: Prevent Unauthorized Real-Time Retrieval by Large Language Models | Authors: Yisheng Zhong, Yizhu Wen, Junfeng Guo, Mehran Kafai, Heng Huang, Hanqing Guo, Zhuangdi Zhu | Published: 2025-05-19 | Tags: LLM Security, Indirect Prompt Injection, Privacy Management
S3C2 Summit 2024-09: Industry Secure Software Supply Chain Summit | Authors: Imranur Rahman, Yasemin Acar, Michel Cukier, William Enck, Christian Kastner, Alexandros Kapravelos, Dominik Wermke, Laurie Williams | Published: 2025-05-15 | Tags: LLM Security, Software Supply Chain Security, Balance between Education and Automation
AutoPentest: Enhancing Vulnerability Management With Autonomous LLM Agents | Authors: Julius Henke | Published: 2025-05-15 | Tags: LLM Security, RAG, Indirect Prompt Injection
Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data | Authors: Adel ElZemity, Budi Arief, Shujun Li | Published: 2025-05-15 | Tags: LLM Security, Prompt Injection, Large Language Model
Securing RAG: A Risk Assessment and Mitigation Framework | Authors: Lukas Ammann, Sara Ott, Christoph R. Landolt, Marco P. Lehmann | Published: 2025-05-13 | Updated: 2025-05-21 | Tags: LLM Security, RAG, Poisoning Attack on RAG
SecReEvalBench: A Multi-turned Security Resilience Evaluation Benchmark for Large Language Models | Authors: Huining Cui, Wei Liu | Published: 2025-05-12 | Tags: LLM Security, Prompt Injection, Prompt Leaking
Private LoRA Fine-tuning of Open-Source LLMs with Homomorphic Encryption | Authors: Jordan Frery, Roman Bredehoft, Jakub Klemsa, Arthur Meyre, Andrei Stoian | Published: 2025-05-12 | Tags: LLM Security, Cryptography, Machine Learning Technology
One Trigger Token Is Enough: A Defense Strategy for Balancing Safety and Usability in Large Language Models | Authors: Haoran Gu, Handing Wang, Yi Mei, Mengjie Zhang, Yaochu Jin | Published: 2025-05-12 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
LLM-Text Watermarking based on Lagrange Interpolation | Authors: Jarosław Janas, Paweł Morawiecki, Josef Pieprzyk | Published: 2025-05-09 | Updated: 2025-05-13 | Tags: LLM Security, Prompt Leaking, Digital Watermarking for Generative AI
Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs | Authors: Chetan Pathade | Published: 2025-05-07 | Updated: 2025-05-13 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection