Literature Database

Evaluations of Machine Learning Privacy Defenses are Misleading

Authors: Michael Aerni, Jie Zhang, Florian Tramèr | Published: 2024-04-26 | Updated: 2024-09-05
Privacy Protection Method
Membership Inference
Adversarial Example

Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications

Authors: Quan Zhang, Binqi Zeng, Chijin Zhou, Gwihwan Go, Heyuan Shi, Yu Jiang | Published: 2024-04-26
Poisoning Attack on RAG
Prompt Leaking
Poisoning

An Analysis of Recent Advances in Deepfake Image Detection in an Evolving Threat Landscape

Authors: Sifat Muhammad Abdullah, Aravind Cheruvu, Shravya Kanchi, Taejoong Chung, Peng Gao, Murtuza Jadliwala, Bimal Viswanath | Published: 2024-04-24
Poisoning
Watermark Evaluation
Defense Method

Attacks on Third-Party APIs of Large Language Models

Authors: Wanru Zhao, Vidit Khazanchi, Haodi Xing, Xuanli He, Qiongkai Xu, Nicholas Donald Lane | Published: 2024-04-24
LLM Security
Prompt Injection
Attack Method

Guardians of the Quantum GAN

Authors: Archisman Ghosh, Debarshi Kundu, Avimita Chatterjee, Swaroop Ghosh | Published: 2024-04-24 | Updated: 2024-05-15
Watermarking
Security Analysis
Quantum Framework

A Comparative Analysis of Adversarial Robustness for Quantum and Classical Machine Learning Models

Authors: Maximilian Wendlinger, Kilian Tscharke, Pascal Debus | Published: 2024-04-24
Poisoning
Adversarial Training
Quantum Framework

From Local to Global: A Graph RAG Approach to Query-Focused Summarization

Authors: Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, Dasha Metropolitansky, Robert Osazuwa Ness, Jonathan Larson | Published: 2024-04-24 | Updated: 2025-02-19
RAG
Explainability of Graph Machine Learning
Data Extraction and Analysis

Act as a Honeytoken Generator! An Investigation into Honeytoken Generation with Large Language Models

Authors: Daniel Reti, Norman Becker, Tillmann Angeli, Anasuya Chattopadhyay, Daniel Schneider, Sebastian Vollmer, Hans D. Schotten | Published: 2024-04-24
LLM Performance Evaluation
Honeypot Technology
Prompt Injection

zkLLM: Zero Knowledge Proofs for Large Language Models

Authors: Haochen Sun, Jason Li, Hongyang Zhang | Published: 2024-04-24
Prompt Injection
Computational Efficiency
Watermark Robustness

Collaborative Heterogeneous Causal Inference Beyond Meta-analysis

Authors: Tianyu Guo, Sai Praneeth Karimireddy, Michael I. Jordan | Published: 2024-04-24
Algorithm
Watermarking
Bias