Model-Targeted Poisoning Attacks with Provable Convergence

Authors: Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, Yuan Tian | Published: 2020-06-30 | Updated: 2021-04-21

Reducing Risk of Model Inversion Using Privacy-Guided Training

Authors: Abigail Goldsteen, Gilad Ezov, Ariel Farkash | Published: 2020-06-29

FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications

Authors: Yunfei Song, Tian Liu, Tongquan Wei, Xiangfeng Wang, Zhe Tao, Mingsong Chen | Published: 2020-06-28

Understanding Gradient Clipping in Private SGD: A Geometric Perspective

Authors: Xiangyi Chen, Zhiwei Steven Wu, Mingyi Hong | Published: 2020-06-27 | Updated: 2021-03-18

ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining

Authors: Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha | Published: 2020-06-26 | Updated: 2021-06-30

Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?

Authors: Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu | Published: 2020-06-26 | Updated: 2022-07-28

Orthogonal Deep Models As Defense Against Black-Box Attacks

Authors: Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian | Published: 2020-06-26

Deep Partition Aggregation: Provable Defense against General Poisoning Attacks

Authors: Alexander Levine, Soheil Feizi | Published: 2020-06-26 | Updated: 2021-03-18

Proper Network Interpretability Helps Adversarial Robustness in Classification

Authors: Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel | Published: 2020-06-26 | Updated: 2020-10-21

Can 3D Adversarial Logos Cloak Humans?

Authors: Yi Wang, Jingyang Zhou, Tianlong Chen, Sijia Liu, Shiyu Chang, Chandrajit Bajaj, Zhangyang Wang | Published: 2020-06-25 | Updated: 2020-11-27