JULI: Jailbreak Large Language Models by Self-Introspection | Authors: Jesson Wang, Zhanhao Hu, David Wagner | Published: 2025-05-17 | Updated: 2025-05-20 | Tags: API Security, Disabling Safety Mechanisms of LLM, Prompt Injection
Phare: A Safety Probe for Large Language Models | Authors: Pierre Le Jeune, Benoît Malézieux, Weixuan Xiao, Matteo Dora | Published: 2025-05-16 | Updated: 2025-05-19 | Tags: RAG, Bias Mitigation Techniques, Hallucination
S3C2 Summit 2024-09: Industry Secure Software Supply Chain Summit | Authors: Imranur Rahman, Yasemin Acar, Michel Cukier, William Enck, Christian Kastner, Alexandros Kapravelos, Dominik Wermke, Laurie Williams | Published: 2025-05-15 | Tags: LLM Security, Software Supply Chain Security, Balance Between Education and Automation
Quantized Approximate Signal Processing (QASP): Towards Homomorphic Encryption for audio | Authors: Tu Duyen Nguyen, Adrien Lesage, Clotilde Cantini, Rachid Riad | Published: 2025-05-15 | Tags: Quantized Neural Network, Audio Data Processing System, Speech Recognition System
AutoPentest: Enhancing Vulnerability Management With Autonomous LLM Agents | Authors: Julius Henke | Published: 2025-05-15 | Tags: LLM Security, RAG, Indirect Prompt Injection
Private Transformer Inference in MLaaS: A Survey | Authors: Yang Li, Xinyu Zhou, Yitong Wang, Liangxin Qian, Jun Zhao | Published: 2025-05-15 | Tags: Encryption Technology, Machine Learning, Computational Consistency
Cutting Through Privacy: A Hyperplane-Based Data Reconstruction Attack in Federated Learning | Authors: Francesco Diana, André Nusser, Chuan Xu, Giovanni Neglia | Published: 2025-05-15 | Tags: Prompt Leaking, Model Extraction Attack, Exploratory Attack
One Shot Dominance: Knowledge Poisoning Attack on Retrieval-Augmented Generation Systems | Authors: Zhiyuan Chang, Mingyang Li, Xiaojun Jia, Junjie Wang, Yuekai Huang, Ziyou Jiang, Yang Liu, Qing Wang | Published: 2025-05-15 | Updated: 2025-05-20 | Tags: Poisoning Attack on RAG, Poisoning, Poisoning Attack
Dark LLMs: The Growing Threat of Unaligned AI Models | Authors: Michael Fire, Yitzhak Elbazis, Adi Wasenstein, Lior Rokach | Published: 2025-05-15 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Large Language Model
Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data | Authors: Adel ElZemity, Budi Arief, Shujun Li | Published: 2025-05-15 | Tags: LLM Security, Prompt Injection, Large Language Model