Quantifying and Mitigating Privacy Risks of Contrastive Learning

Authors: Xinlei He, Yang Zhang | Published: 2021-02-08 | Updated: 2021-09-21
Poisoning
Membership Inference
Label Inference Attack

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

Authors: Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang | Published: 2021-02-04 | Updated: 2021-10-06
Poisoning
Membership Inference
Model Performance Evaluation

FLAME: Taming Backdoors in Federated Learning (Extended Version 1)

Authors: Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Shaza Zeitouni, Farinaz Koushanfar, Ahmad-Reza Sadeghi, Thomas Schneider | Published: 2021-01-06 | Updated: 2023-08-05
Backdoor Attack Techniques
Poisoning
Defense Effectiveness Analysis

Local Competition and Stochasticity for Adversarial Robustness in Deep Learning

Authors: Konstantinos P. Panousis, Sotirios Chatzis, Antonios Alexos, Sergios Theodoridis | Published: 2021-01-04 | Updated: 2021-03-29
Poisoning
Model Performance Evaluation
Deep Learning Method

Active Learning Under Malicious Mislabeling and Poisoning Attacks

Authors: Jing Lin, Ryan Luley, Kaiqi Xiong | Published: 2021-01-01 | Updated: 2021-09-02
Backdoor Attack
Poisoning
Performance Evaluation

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

Authors: Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein | Published: 2020-12-18 | Updated: 2021-03-31
Backdoor Attack
Poisoning
Model Protection Methods

Achieving Security and Privacy in Federated Learning Systems: Survey, Research Challenges and Future Directions

Authors: Alberto Blanco-Justicia, Josep Domingo-Ferrer, Sergio Martínez, David Sánchez, Adrian Flanagan, Kuan Eeik Tan | Published: 2020-12-12
Attack Methods against DFL
Poisoning
Federated Learning

I-GCN: Robust Graph Convolutional Network via Influence Mechanism

Authors: Haoxi Zhan, Xiaobing Pei | Published: 2020-12-11
Poisoning
Role of Machine Learning
Knowledge Graph

FAT: Federated Adversarial Training

Authors: Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Beat Buesser | Published: 2020-12-03
Backdoor Attack
Poisoning
Adversarial Training

Practical Privacy Attacks on Vertical Federated Learning

Authors: Haiqin Weng, Juntao Zhang, Xingjun Ma, Feng Xue, Tao Wei, Shouling Ji, Zhiyuan Zong | Published: 2020-11-18 | Updated: 2022-07-22
Data Privacy Assessment
Poisoning
Attack Type