Poisoning

Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications

Authors: Quan Zhang, Binqi Zeng, Chijin Zhou, Gwihwan Go, Heyuan Shi, Yu Jiang | Published: 2024-04-26
Tags: Poisoning attack on RAG, Prompt leaking, Poisoning

An Analysis of Recent Advances in Deepfake Image Detection in an Evolving Threat Landscape

Authors: Sifat Muhammad Abdullah, Aravind Cheruvu, Shravya Kanchi, Taejoong Chung, Peng Gao, Murtuza Jadliwala, Bimal Viswanath | Published: 2024-04-24
Tags: Poisoning, Watermark Evaluation, Defense Method

A Comparative Analysis of Adversarial Robustness for Quantum and Classical Machine Learning Models

Authors: Maximilian Wendlinger, Kilian Tscharke, Pascal Debus | Published: 2024-04-24
Tags: Poisoning, Adversarial Training, Quantum Framework

Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models

Authors: Jiaming He, Wenbo Jiang, Guanyu Hou, Wenshu Fan, Rui Zhang, Hongwei Li | Published: 2024-04-23 | Updated: 2025-01-08
Tags: LLM Security, Backdoor Attack, Poisoning

Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning

Authors: Emre Ozfatura, Kerem Ozfatura, Alptekin Kupcu, Deniz Gunduz | Published: 2024-04-09
Tags: Poisoning, Attack Method, Defense Method

Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning

Authors: Yu Bi, Yekai Li, Xuan Feng, Xianghang Mi | Published: 2024-04-08
Tags: Spam Detection, Poisoning, Federated Learning

Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning

Authors: K Naveen Kumar, C Krishna Mohan, Aravind Machiry | Published: 2024-04-05
Tags: Poisoning, Federated Learning, Defense Method

Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models

Authors: Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini | Published: 2024-04-01
Tags: Backdoor Attack, Poisoning, Membership Inference

A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks

Authors: Orson Mengara | Published: 2024-03-29 | Updated: 2024-04-07
Tags: Dataset Generation, Backdoor Attack, Poisoning

Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing

Authors: Ehsan Lari, Reza Arablouei, Vinay Chakravarthi Gogineni, Stefan Werner | Published: 2024-03-19 | Updated: 2024-08-16
Tags: Poisoning, Communication Efficiency, Federated Learning