Poisoning

Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications

Authors: Ali Raza, Shujun Li, Kim-Phuc Tran, Ludovic Koehl, Kim Duc Tran | Published: 2022-07-18 | Updated: 2025-03-25
Poisoning
Malicious Client
Detection of Poisonous Data

Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware

Authors: Luca Demetrio, Battista Biggio, Fabio Roli | Published: 2022-07-12
Attack Methods against DFL
Poisoning
Malware Propagation Means

Efficient and Privacy Preserving Group Signature for Federated Learning

Authors: Sneha Kanchan, Jae Won Jang, Jun Yong Yoon, Bong Jun Choi | Published: 2022-07-12 | Updated: 2022-07-15
Group Signature
Poisoning
Communication Efficiency

Statistical Detection of Adversarial Examples in Blockchain-based Federated Forest In-vehicle Network Intrusion Detection Systems

Authors: Ibrahim Aliyu, Selinde van Engelenburg, Muhammed Bashir Muazu, Jinsul Kim, Chang Gyoon Lim | Published: 2022-07-11
Poisoning
Attack Type
Adversarial Learning

Federated and Transfer Learning: A Survey on Adversaries and Defense Mechanisms

Authors: Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif | Published: 2022-07-05
Privacy Protection
Poisoning
Defense Method

Defending against the Label-flipping Attack in Federated Learning

Authors: Najeeb Moharram Jebreel, Josep Domingo-Ferrer, David Sánchez, Alberto Blanco-Justicia | Published: 2022-07-05
Algorithm Design
Poisoning
Defense Method

FL-Defender: Combating Targeted Attacks in Federated Learning

Authors: Najeeb Jebreel, Josep Domingo-Ferrer | Published: 2022-07-02
Attack Methods against DFL
Algorithm Design
Poisoning

I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences

Authors: Daryna Oliynyk, Rudolf Mayer, Andreas Rauber | Published: 2022-06-16 | Updated: 2023-06-06
Poisoning
Membership Inference
Adversarial Attack Methods

Deep Leakage from Model in Federated Learning

Authors: Zihao Zhao, Mengen Luo, Wenbo Ding | Published: 2022-06-10
Attack Methods against DFL
Poisoning
Federated Learning

Gradient Obfuscation Gives a False Sense of Security in Federated Learning

Authors: Kai Yue, Richeng Jin, Chau-Wai Wong, Dror Baron, Huaiyu Dai | Published: 2022-06-08 | Updated: 2022-10-14
Attack Methods against DFL
Poisoning
Reconstruction Durability