ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining

Authors: Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha | Published: 2020-06-26 | Updated: 2021-06-30
Out-of-Distribution Detection
Adversarial Example Detection
Adversarial Attack

Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?

Authors: Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu | Published: 2020-06-26 | Updated: 2022-07-28
Backdoor Attack
Adversarial Example Detection
Adversarial Attack

Orthogonal Deep Models As Defense Against Black-Box Attacks

Authors: Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian | Published: 2020-06-26
Poisoning
Adversarial Example
Adversarial Attack

Proper Network Interpretability Helps Adversarial Robustness in Classification

Authors: Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel | Published: 2020-06-26 | Updated: 2020-10-21
Adversarial Example
Adversarial Attack
Interpretation Method

Can 3D Adversarial Logos Cloak Humans?

Authors: Yi Wang, Jingyang Zhou, Tianlong Chen, Sijia Liu, Shiyu Chang, Chandrajit Bajaj, Zhangyang Wang | Published: 2020-06-25 | Updated: 2020-11-27
Logo Transformation Method
Adversarial Attack
Generative Model

Network Moments: Extensions and Sparse-Smooth Attacks

Authors: Modar Alfadly, Adel Bibi, Emilio Botero, Salman Alsubaihi, Bernard Ghanem | Published: 2020-06-21
Adversarial Attack
Deep Learning Method
Statistical Methods

Towards an Adversarially Robust Normalization Approach

Authors: Muhammad Awais, Fahad Shamshad, Sung-Ho Bae | Published: 2020-06-19
Hyperparameter Optimization
Adversarial Learning
Adversarial Attack

Adversarial Attacks for Multi-view Deep Models

Authors: Xuli Sun, Shiliang Sun | Published: 2020-06-19
Attack Method
Adversarial Example
Adversarial Attack

Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples

Authors: Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen | Published: 2020-06-18 | Updated: 2021-05-20
Adversarial Example
Adversarial Attack
Defense Mechanism

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Authors: Xiang Zhang, Marinka Zitnik | Published: 2020-06-15 | Updated: 2020-10-28
Graph Neural Network
Adversarial Attack
Toxicity Attack