Literature Database

Quantized Approximate Signal Processing (QASP): Towards Homomorphic Encryption for audio

Authors: Tu Duyen Nguyen, Adrien Lesage, Clotilde Cantini, Rachid Riad | Published: 2025-05-15
Quantized Neural Network
Speech Data Processing System
Speech Recognition System

AutoPentest: Enhancing Vulnerability Management With Autonomous LLM Agents

Authors: Julius Henke | Published: 2025-05-15
LLM Security
RAG
Indirect Prompt Injection

Private Transformer Inference in MLaaS: A Survey

Authors: Yang Li, Xinyu Zhou, Yitong Wang, Liangxin Qian, Jun Zhao | Published: 2025-05-15
Encryption Technology
Machine Learning
Computational Consistency

Cutting Through Privacy: A Hyperplane-Based Data Reconstruction Attack in Federated Learning

Authors: Francesco Diana, André Nusser, Chuan Xu, Giovanni Neglia | Published: 2025-05-15
Prompt Leaking
Model Extraction Attack
Exploratory Attack

One Shot Dominance: Knowledge Poisoning Attack on Retrieval-Augmented Generation Systems

Authors: Zhiyuan Chang, Mingyang Li, Xiaojun Jia, Junjie Wang, Yuekai Huang, Ziyou Jiang, Yang Liu, Qing Wang | Published: 2025-05-15 | Updated: 2025-05-20
Poisoning Attack on RAG
Poisoning
Poisoning Attack

Dark LLMs: The Growing Threat of Unaligned AI Models

Authors: Michael Fire, Yitzhak Elbazis, Adi Wasenstein, Lior Rokach | Published: 2025-05-15
Disabling LLM Safety Mechanisms
Prompt Injection
Large Language Model

Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data

Authors: Adel ElZemity, Budi Arief, Shujun Li | Published: 2025-05-15
LLM Security
Prompt Injection
Large Language Model

From Trade-off to Synergy: A Versatile Symbiotic Watermarking Framework for Large Language Models

Authors: Yidan Wang, Yubing Ren, Yanan Cao, Binxing Fang | Published: 2025-05-15
Model DoS
Digital Watermarking for Generative AI
Watermark Removal Technology

PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization

Authors: Yidan Wang, Yanan Cao, Yubing Ren, Fang Fang, Zheng Lin, Binxing Fang | Published: 2025-05-15
Disabling LLM Safety Mechanisms
Prompt Injection
Privacy Protection in Machine Learning

Adversarial Suffix Filtering: a Defense Pipeline for LLMs

Authors: David Khachaturov, Robert Mullins | Published: 2025-05-14
Prompt Validation
Compliance with Ethical Standards
Attack Detection Method