Defending against Backdoors in Federated Learning with Robust Learning Rate

Authors: Mustafa Safa Ozdayi, Murat Kantarcioglu, Yulia R. Gel | Published: 2020-07-07 | Updated: 2021-07-29
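
The core idea suggested by the title can be sketched in a few lines: the server checks, per parameter, whether enough clients agree on the sign of the update, and flips the learning rate where agreement is weak. The sign-voting threshold `theta` and the plain mean aggregation below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def robust_lr_aggregate(client_updates, server_lr=1.0, theta=4):
    """Aggregate client updates with a per-dimension robust learning rate.

    client_updates: list of 1-D numpy arrays, one update per client.
    theta: minimum net sign agreement required per dimension
           (an assumed hyperparameter for this sketch).
    """
    updates = np.stack(client_updates)        # shape: (num_clients, dim)
    sign_sum = np.sign(updates).sum(axis=0)   # net sign agreement per dim
    # Keep the learning rate where clients agree, negate it elsewhere so
    # suspicious (potentially backdoored) dimensions are pushed back.
    lr = np.where(np.abs(sign_sum) >= theta, server_lr, -server_lr)
    return lr * updates.mean(axis=0)

# Example: 10 benign clients pulling one way, 2 attackers the other.
rng = np.random.default_rng(0)
benign = [rng.normal(0.5, 0.1, size=8) for _ in range(10)]
attackers = [rng.normal(-2.0, 0.1, size=8) for _ in range(2)]
print(robust_lr_aggregate(benign + attackers))
```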

Backdoor attacks and defenses in feature-partitioned collaborative learning

Authors: Yang Liu, Zhihao Yi, Tianjian Chen | Published: 2020-07-07

Stochastic Linear Bandits Robust to Adversarial Attacks

Authors: Ilija Bogunovic, Arpan Losalka, Andreas Krause, Jonathan Scarlett | Published: 2020-07-07 | Updated: 2020-10-27
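
For orientation, the adversarially corrupted setting behind this title is commonly written as the standard corrupted stochastic linear bandit model below; the corruption budget $C$ and this exact formulation are the usual convention and may differ in details from the paper:

$$
y_t = \langle \theta^*, x_t \rangle + c_t(x_t) + \eta_t,
\qquad
\sum_{t=1}^{T} \max_{x \in \mathcal{D}} |c_t(x)| \le C,
$$

where $x_t \in \mathcal{D} \subset \mathbb{R}^d$ is the chosen action, $\eta_t$ is zero-mean noise, and the adversary's total corruption over $T$ rounds is bounded by $C$. Robustness here means regret against $x^* = \arg\max_{x \in \mathcal{D}} \langle \theta^*, x \rangle$ that degrades gracefully as $C$ grows.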

Robust Learning with Frequency Domain Regularization

Authors: Weiyu Guo, Yidong Ouyang | Published: 2020-07-07
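
A generic frequency-domain regularizer can be illustrated by penalizing the high-frequency energy of convolution kernels, since adversarial perturbations often exploit high-frequency patterns. The penalty form and the band size `keep` below are assumptions for illustration, not necessarily the paper's regularizer.

```python
import numpy as np

def high_freq_penalty(kernel, keep=1):
    """Energy of a 2-D convolution kernel outside a low-frequency band.

    Adding this penalty to the training loss discourages filters that
    rely on high-frequency content. The band size `keep` and this exact
    penalty are illustrative assumptions for this sketch.
    """
    spec = np.fft.fftshift(np.fft.fft2(kernel))   # center low frequencies
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    mask = np.ones((h, w), dtype=bool)
    mask[cy - keep:cy + keep + 1, cx - keep:cx + keep + 1] = False
    return float(np.sum(np.abs(spec[mask]) ** 2))

rng = np.random.default_rng(5)
smooth = np.ones((5, 5)) / 25.0     # averaging filter: near-zero penalty
noisy = rng.normal(size=(5, 5))     # broadband random filter: large penalty
print(high_freq_penalty(smooth), high_freq_penalty(noisy))
```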

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

Authors: Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem | Published: 2020-07-07 | Updated: 2020-07-18
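
Regional perturbation can be illustrated with a mask-restricted FGSM step: confining the perturbation to a patch shrinks its $L_1$ and $L_2$ norms while the patch itself can stay strong. The rectangular region and single-step attack below are a generic illustration, not the paper's method.

```python
import numpy as np

def masked_fgsm(image, grad, epsilon=0.03, region=None):
    """One FGSM-style step confined to a rectangular region.

    image, grad: arrays of shape (H, W, C); grad is the loss gradient
    w.r.t. the image (assumed precomputed by some model).
    region: (row0, row1, col0, col1) where perturbation is allowed.
    """
    mask = np.zeros_like(image)
    if region is None:
        mask[:] = 1.0                      # fall back to a full-image attack
    else:
        r0, r1, c0, c1 = region
        mask[r0:r1, c0:c1, :] = 1.0
    perturbation = epsilon * np.sign(grad) * mask
    return np.clip(image + perturbation, 0.0, 1.0)

# A 32x32 RGB toy image with a random "gradient", perturbing an 8x8 patch.
rng = np.random.default_rng(1)
img = rng.uniform(size=(32, 32, 3))
g = rng.normal(size=(32, 32, 3))
adv = masked_fgsm(img, g, region=(12, 20, 12, 20))
print(np.abs(adv - img).sum())             # only the patch changes
```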

Sharing Models or Coresets: A Study based on Membership Inference Attack

Authors: Hanlin Lu, Changchang Liu, Ting He, Shiqiang Wang, Kevin S. Chan | Published: 2020-07-06
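
The membership inference attack referenced here can be illustrated with the classic loss-threshold test: models tend to fit training points more tightly, so low loss suggests membership. This generic test is for illustration only; the paper's attack and its model-versus-coreset comparison pipeline are more involved.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Predict 'member' when the model's loss on a point is below threshold.

    Returns the attack's accuracy over a balanced member/non-member set.
    The threshold is an assumed tuning knob for this sketch.
    """
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])
    preds = (losses < threshold).astype(float)
    return (preds == labels).mean()

# Synthetic losses: members fit better (lower loss) than non-members.
rng = np.random.default_rng(2)
members = rng.gamma(shape=2.0, scale=0.1, size=1000)
nonmembers = rng.gamma(shape=2.0, scale=0.3, size=1000)
print(loss_threshold_mia(members, nonmembers, threshold=0.35))
```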

Descent-to-Delete: Gradient-Based Methods for Machine Unlearning

Authors: Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi | Published: 2020-07-06
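
The title's gradient-based unlearning can be sketched for least-squares regression: drop the deleted rows, fine-tune the current model with a few gradient steps on what remains, then publish a Gaussian-perturbed result so it is hard to distinguish from full retraining. The step count, learning rate, and noise scale below are illustrative, not the paper's calibrated settings.

```python
import numpy as np

def unlearn_by_descent(w, X, y, removed_idx, steps=25, lr=0.1, noise_std=0.01):
    """Gradient-based unlearning sketch for least-squares regression."""
    keep = np.setdiff1d(np.arange(len(y)), np.asarray(removed_idx))
    Xr, yr = X[keep], y[keep]
    for _ in range(steps):
        grad = Xr.T @ (Xr @ w - yr) / len(yr)   # least-squares gradient
        w = w - lr * grad
    # Perturb the published model; the noise scale here is not calibrated.
    noise = np.random.default_rng(3).normal(0.0, noise_std, size=w.shape)
    return w + noise

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(0.0, 0.1, size=200)
w0 = np.linalg.lstsq(X, y, rcond=None)[0]       # model trained on all data
print(unlearn_by_descent(w0, X, y, removed_idx=[0, 1, 2]))
```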

Certifying Decision Trees Against Evasion Attacks by Program Analysis

Authors: Stefano Calzavara, Pietro Ferrara, Claudio Lucchese | Published: 2020-07-06
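
A simplified form of such certification propagates an $L_\infty$ box around the input through the tree and checks whether every reachable leaf agrees. The toy dict encoding below is an assumption for this sketch; the paper's program-analysis machinery is far more general.

```python
def certify_tree(node, lo, hi):
    """Collect all leaf labels reachable when each feature x[i] can take
    any value in [lo[i], hi[i]] (an L-infinity box around the input).

    node: {'feat': i, 'thr': t, 'left': ..., 'right': ...} for a split,
          {'label': c} for a leaf (a toy encoding assumed here).
    The prediction is certified robust iff the returned set is a singleton.
    """
    if 'label' in node:
        return {node['label']}
    f, t = node['feat'], node['thr']
    labels = set()
    if lo[f] <= t:                      # some point in the box goes left
        labels |= certify_tree(node['left'], lo, hi)
    if hi[f] > t:                       # some point in the box goes right
        labels |= certify_tree(node['right'], lo, hi)
    return labels

# Toy tree: split on x[0], then on x[1].
tree = {'feat': 0, 'thr': 0.5,
        'left': {'label': 0},
        'right': {'feat': 1, 'thr': 0.3,
                  'left': {'label': 0}, 'right': {'label': 1}}}

x, eps = [0.7, 0.8], 0.1
lo = [v - eps for v in x]
hi = [v + eps for v in x]
reachable = certify_tree(tree, lo, hi)
print(reachable, "robust" if len(reachable) == 1 else "not robust")
```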

Black-box Adversarial Example Generation with Normalizing Flows

Authors: Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie | Published: 2020-07-06

On Data Augmentation and Adversarial Risk: An Empirical Analysis

Authors: Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, Michal Lewandowski, Werner Zellinger, Bernhard A. Moser, Gerhard Widmer | Published: 2020-07-06