Poisoning

Input Hessian Regularization of Neural Networks

Authors: Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft | Published: 2020-09-14
Poisoning
Robust Regression
Adversarial Training

A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses

Authors: Ambar Pal, René Vidal | Published: 2020-09-14 | Updated: 2020-11-11
Game Theory
Poisoning
Adversarial Training

Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent

Authors: Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen | Published: 2020-09-10 | Updated: 2023-04-20
Poisoning
Performance Evaluation
Adversarial Attack Methods

A black-box adversarial attack for poisoning clustering

Authors: Antonio Emanuele Cinà, Alessandro Torcinovich, Marcello Pelillo | Published: 2020-09-09 | Updated: 2021-11-10
Backdoor Attack
Poisoning
Content Specific to Poisoning Attacks

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning

Authors: Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro | Published: 2020-09-08 | Updated: 2022-05-27
Backdoor Attack
Poisoning
Membership Disclosure Risk

Detection Defense Against Adversarial Attacks with Saliency Map

Authors: Dengpan Ye, Chuanxi Chen, Changrui Liu, Hao Wang, Shunzhi Jiang | Published: 2020-09-06
Poisoning
Adversarial Example
Adversarial Attack Methods

Improving Resistance to Adversarial Deformations by Regularizing Gradients

Authors: Pengfei Xia, Bin Li | Published: 2020-08-29 | Updated: 2020-10-06
Poisoning
Adversarial Example
Adversarial Attack

Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning

Authors: Yinghua Zhang, Yangqiu Song, Jian Liang, Kun Bai, Qiang Yang | Published: 2020-08-25
Poisoning
Adversarial Learning
Deep Learning

Defending Distributed Classifiers Against Data Poisoning Attacks

Authors: Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie | Published: 2020-08-21
Poisoning
Attack Method
Adversarial Learning

Defending Regression Learners Against Poisoning Attacks

Authors: Sandamal Weerasinghe, Sarah M. Erfani, Tansu Alpcan, Christopher Leckie, Justin Kopacz | Published: 2020-08-21
Backdoor Attack
Poisoning
Content Specific to Poisoning Attacks