Stochastic Linear Bandits Robust to Adversarial Attacks

Authors: Ilija Bogunovic, Arpan Losalka, Andreas Krause, Jonathan Scarlett | Published: 2020-07-07 | Updated: 2020-10-27

Robust Learning with Frequency Domain Regularization

Authors: Weiyu Guo, Yidong Ouyang | Published: 2020-07-07

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

Authors: Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem | Published: 2020-07-07 | Updated: 2020-07-18

Sharing Models or Coresets: A Study based on Membership Inference Attack

Authors: Hanlin Lu, Changchang Liu, Ting He, Shiqiang Wang, Kevin S. Chan | Published: 2020-07-06

Descent-to-Delete: Gradient-Based Methods for Machine Unlearning

Authors: Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi | Published: 2020-07-06

Certifying Decision Trees Against Evasion Attacks by Program Analysis

Authors: Stefano Calzavara, Pietro Ferrara, Claudio Lucchese | Published: 2020-07-06

Black-box Adversarial Example Generation with Normalizing Flows

Authors: Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie | Published: 2020-07-06

On Data Augmentation and Adversarial Risk: An Empirical Analysis

Authors: Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, Michal Lewandowski, Werner Zellinger, Bernhard A. Moser, Gerhard Widmer | Published: 2020-07-06

Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain

Authors: Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach | Published: 2020-07-05 | Updated: 2021-03-13

Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors

Authors: Zijian Jiang, Jianwen Zhou, Haiping Huang | Published: 2020-07-04 | Updated: 2020-12-23