Poisoning

SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics

Authors: Jonathan Hayase, Weihao Kong, Raghav Somani, Sewoong Oh | Published: 2021-04-22
Tags: Backdoor Attack | Poisoning | Poisoning Attack

Mapping the Internet: Modelling Entity Interactions in Complex Heterogeneous Networks

Authors: Simon Mandlik, Tomas Pevny | Published: 2021-04-19 | Updated: 2022-06-08
Tags: Poisoning | Model Design | Machine Learning Technology

Defending Against Adversarial Denial-of-Service Data Poisoning Attacks

Authors: Nicolas M. Müller, Simon Roschmann, Konstantin Böttinger | Published: 2021-04-14 | Updated: 2021-11-30
Tags: Backdoor Attack | Poisoning | Poisoning Attack

Towards Causal Federated Learning For Enhanced Robustness and Privacy

Authors: Sreya Francis, Irene Tenison, Irina Rish | Published: 2021-04-14
Tags: Privacy Protection | Poisoning | Threat Model

Sparse Coding Frontend for Robust Neural Networks

Authors: Can Bakiskan, Metehan Cekic, Ahmet Dundar Sezer, Upamanyu Madhow | Published: 2021-04-12
Tags: Poisoning | Adversarial Example Detection | Defense Mechanism

Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models

Authors: Neal Mangaokar, Jiameng Pu, Parantapa Bhattacharya, Chandan K. Reddy, Bimal Viswanath | Published: 2021-04-05
Tags: Poisoning | Watermarking Settings for Medical Data | Threat Model

Attribution of Gradient Based Adversarial Attacks for Reverse Engineering of Deceptions

Authors: Michael Goebel, Jason Bunk, Srinjoy Chattopadhyay, Lakshmanan Nataraj, Shivkumar Chandrasekaran, B. S. Manjunath | Published: 2021-03-19
Tags: Data Extraction and Analysis | Poisoning | Adversarial Attack Methods

Quantum federated learning through blind quantum computing

Authors: Weikang Li, Sirui Lu, Dong-Ling Deng | Published: 2021-03-15 | Updated: 2021-09-02
Tags: Privacy Risk Management | Poisoning | Quantum Classifier

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks

Authors: Ginevra Carbone, Guido Sanguinetti, Luca Bortolussi | Published: 2021-02-22 | Updated: 2022-05-05
Tags: Bayesian Classification | Poisoning | Adversarial Example

“What’s in the box?!”: Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models

Authors: Sahar Abdelnabi, Mario Fritz | Published: 2021-02-09 | Updated: 2021-03-09
Tags: Poisoning | Model Performance Evaluation | Attack Method