Quantized Approximate Signal Processing (QASP): Towards Homomorphic Encryption for audio
Authors: Tu Duyen Nguyen, Adrien Lesage, Clotilde Cantini, Rachid Riad | Published: 2025-05-15
Tags: Quantized Neural Network, Speech Data Processing System, Speech Recognition System
2025.05.15 2025.05.28 Literature Database
AutoPentest: Enhancing Vulnerability Management With Autonomous LLM Agents
Authors: Julius Henke | Published: 2025-05-15
Tags: LLM Security, RAG, Indirect Prompt Injection
Private Transformer Inference in MLaaS: A Survey
Authors: Yang Li, Xinyu Zhou, Yitong Wang, Liangxin Qian, Jun Zhao | Published: 2025-05-15
Tags: Encryption Technology, Machine Learning, Computational Consistency
Cutting Through Privacy: A Hyperplane-Based Data Reconstruction Attack in Federated Learning
Authors: Francesco Diana, André Nusser, Chuan Xu, Giovanni Neglia | Published: 2025-05-15
Tags: Prompt Leaking, Model Extraction Attack, Exploratory Attack
One Shot Dominance: Knowledge Poisoning Attack on Retrieval-Augmented Generation Systems
Authors: Zhiyuan Chang, Mingyang Li, Xiaojun Jia, Junjie Wang, Yuekai Huang, Ziyou Jiang, Yang Liu, Qing Wang | Published: 2025-05-15 | Updated: 2025-05-20
Tags: Poisoning Attack on RAG, Poisoning, Poisoning Attack
Dark LLMs: The Growing Threat of Unaligned AI Models
Authors: Michael Fire, Yitzhak Elbazis, Adi Wasenstein, Lior Rokach | Published: 2025-05-15
Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Large Language Model
Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data
Authors: Adel ElZemity, Budi Arief, Shujun Li | Published: 2025-05-15
Tags: LLM Security, Prompt Injection, Large Language Model
From Trade-off to Synergy: A Versatile Symbiotic Watermarking Framework for Large Language Models
Authors: Yidan Wang, Yubing Ren, Yanan Cao, Binxing Fang | Published: 2025-05-15
Tags: Model DoS, Digital Watermarking for Generative AI, Watermark Removal Technology
PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization
Authors: Yidan Wang, Yanan Cao, Yubing Ren, Fang Fang, Zheng Lin, Binxing Fang | Published: 2025-05-15
Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Privacy Protection in Machine Learning
Adversarial Suffix Filtering: a Defense Pipeline for LLMs
Authors: David Khachaturov, Robert Mullins | Published: 2025-05-14
Tags: Prompt Validation, Compliance with Ethical Standards, Attack Detection Method