Poisoning

Improved Adversarial Training via Learned Optimizer

Authors: Yuanhao Xiong, Cho-Jui Hsieh | Published: 2020-04-25
Poisoning
Optimization Problem
Adaptive Adversarial Training

A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers

Authors: Arka Ghosh, Sankha Subhra Mullick, Shounak Datta, Swagatam Das, Rammohan Mallipeddi, Asit Kr. Das | Published: 2020-04-24 | Updated: 2021-09-09
Poisoning
Adversarial Attack Methods
Optimization Problem

Adversarial Attacks and Defenses: An Interpretation Perspective

Authors: Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu | Published: 2020-04-23 | Updated: 2020-10-07
Poisoning
Adversarial Example
Adversarial Attack Methods

How to compare adversarial robustness of classifiers from a global perspective

Authors: Niklas Risse, Christina Göpfert, Jan Philip Göpfert | Published: 2020-04-22 | Updated: 2020-10-15
Poisoning
Robustness Analysis
Evaluation Method

A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

Authors: Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu | Published: 2020-04-22 | Updated: 2020-04-23
Privacy Enhancing Technology
Poisoning
Attack Type

Headless Horseman: Adversarial Attacks on Transfer Learning Models

Authors: Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu | Published: 2020-04-20
Poisoning
Adversarial Perturbation Techniques
Machine Learning

Data Poisoning Attacks on Federated Machine Learning

Authors: Gan Sun, Yang Cong, Jiahua Dong, Qiang Wang, Ji Liu | Published: 2020-04-19
Poisoning
Attack Scenario Analysis
Machine Learning

Poisoning Attacks on Algorithmic Fairness

Authors: David Solans, Battista Biggio, Carlos Castillo | Published: 2020-04-15 | Updated: 2020-06-26
Algorithmic Fairness
Poisoning
Optimization Methods

Weight Poisoning Attacks on Pre-trained Models

Authors: Keita Kurita, Paul Michel, Graham Neubig | Published: 2020-04-14
Backdoor Attack
Poisoning
Adversarial Learning

Towards Federated Learning With Byzantine-Robust Client Weighting

Authors: Amit Portnoy, Yoav Tirosh, Danny Hendler | Published: 2020-04-10 | Updated: 2021-05-18
Poisoning
Robustness Improvement Method
Optimization Problem