Poisoning

Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models

Authors: Neal Mangaokar, Jiameng Pu, Parantapa Bhattacharya, Chandan K. Reddy, Bimal Viswanath | Published: 2021-04-05
Tags: Poisoning, Watermarking Settings for Medical Data, Threat Model

Attribution of Gradient Based Adversarial Attacks for Reverse Engineering of Deceptions

Authors: Michael Goebel, Jason Bunk, Srinjoy Chattopadhyay, Lakshmanan Nataraj, Shivkumar Chandrasekaran, B. S. Manjunath | Published: 2021-03-19
Tags: Data Extraction and Analysis, Poisoning, Adversarial Attack Methods

Quantum federated learning through blind quantum computing

Authors: Weikang Li, Sirui Lu, Dong-Ling Deng | Published: 2021-03-15 | Updated: 2021-09-02
Tags: Privacy Risk Management, Poisoning, Quantum Classifier

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks

Authors: Ginevra Carbone, Guido Sanguinetti, Luca Bortolussi | Published: 2021-02-22 | Updated: 2022-05-05
Tags: Bayesian Classification, Poisoning, Adversarial Example

“What’s in the box?!”: Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models

Authors: Sahar Abdelnabi, Mario Fritz | Published: 2021-02-09 | Updated: 2021-03-09
Tags: Poisoning, Model Performance Evaluation, Attack Method

Quantifying and Mitigating Privacy Risks of Contrastive Learning

Authors: Xinlei He, Yang Zhang | Published: 2021-02-08 | Updated: 2021-09-21
Tags: Poisoning, Membership Inference, Label Inference Attack

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

Authors: Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang | Published: 2021-02-04 | Updated: 2021-10-06
Tags: Poisoning, Membership Inference, Model Performance Evaluation

FLAME: Taming Backdoors in Federated Learning (Extended Version 1)

Authors: Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Shaza Zeitouni, Farinaz Koushanfar, Ahmad-Reza Sadeghi, Thomas Schneider | Published: 2021-01-06 | Updated: 2023-08-05
Tags: Backdoor Attack Techniques, Poisoning, Defense Effectiveness Analysis

Local Competition and Stochasticity for Adversarial Robustness in Deep Learning

Authors: Konstantinos P. Panousis, Sotirios Chatzis, Antonios Alexos, Sergios Theodoridis | Published: 2021-01-04 | Updated: 2021-03-29
Tags: Poisoning, Model Performance Evaluation, Deep Learning Method

Active Learning Under Malicious Mislabeling and Poisoning Attacks

Authors: Jing Lin, Ryan Luley, Kaiqi Xiong | Published: 2021-01-01 | Updated: 2021-09-02
Tags: Backdoor Attack, Poisoning, Performance Evaluation