ByGARS: Byzantine SGD with Arbitrary Number of Attackers

Authors: Jayanth Regatti, Hao Chen, Abhishek Gupta | Published: 2020-06-24 | Updated: 2020-12-07

Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks

Authors: Francesco Croce, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, Matthias Hein | Published: 2020-06-23 | Updated: 2022-02-08

RayS: A Ray Searching Method for Hard-label Adversarial Attack

Authors: Jinghui Chen, Quanquan Gu | Published: 2020-06-23 | Updated: 2020-09-05

Perceptual Adversarial Robustness: Defense Against Unseen Threat Models

Authors: Cassidy Laidlaw, Sahil Singla, Soheil Feizi | Published: 2020-06-22 | Updated: 2021-07-04

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks

Authors: Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein | Published: 2020-06-22 | Updated: 2021-06-17

Learning to Generate Noise for Multi-Attack Robustness

Authors: Divyam Madaan, Jinwoo Shin, Sung Ju Hwang | Published: 2020-06-22 | Updated: 2021-06-24

With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models

Authors: Jialin Wen, Benjamin Zi Hao Zhao, Minhui Xue, Alina Oprea, Haifeng Qian | Published: 2020-06-21 | Updated: 2021-05-19

Free-rider Attacks on Model Aggregation in Federated Learning

Authors: Yann Fraboni, Richard Vidal, Marco Lorenzi | Published: 2020-06-21 | Updated: 2021-02-22

Graph Backdoor

Authors: Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang | Published: 2020-06-21 | Updated: 2021-08-10

Network Moments: Extensions and Sparse-Smooth Attacks

Authors: Modar Alfadly, Adel Bibi, Emilio Botero, Salman Alsubaihi, Bernard Ghanem | Published: 2020-06-21