Adversarial Attacks

Orthogonal Deep Models As Defense Against Black-Box Attacks

Authors: Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian | Published: 2020-06-26
Poisoning
Adversarial Examples
Adversarial Attacks

Proper Network Interpretability Helps Adversarial Robustness in Classification

Authors: Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel | Published: 2020-06-26 | Updated: 2020-10-21
Adversarial Examples
Adversarial Attacks
Interpretability Methods

Can 3D Adversarial Logos Cloak Humans?

Authors: Yi Wang, Jingyang Zhou, Tianlong Chen, Sijia Liu, Shiyu Chang, Chandrajit Bajaj, Zhangyang Wang | Published: 2020-06-25 | Updated: 2020-11-27
Logo Transformation Methods
Adversarial Attacks
Generative Models

Network Moments: Extensions and Sparse-Smooth Attacks

Authors: Modar Alfadly, Adel Bibi, Emilio Botero, Salman Alsubaihi, Bernard Ghanem | Published: 2020-06-21
Adversarial Attacks
Deep Learning Methods
Statistical Methods

Towards an Adversarially Robust Normalization Approach

Authors: Muhammad Awais, Fahad Shamshad, Sung-Ho Bae | Published: 2020-06-19
Hyperparameter Optimization
Adversarial Training
Adversarial Attacks

Adversarial Attacks for Multi-view Deep Models

Authors: Xuli Sun, Shiliang Sun | Published: 2020-06-19
Attack Methods
Adversarial Examples
Adversarial Attacks

Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples

Authors: Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen | Published: 2020-06-18 | Updated: 2021-05-20
Adversarial Examples
Adversarial Attacks
Defense Mechanisms

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Authors: Xiang Zhang, Marinka Zitnik | Published: 2020-06-15 | Updated: 2020-10-28
Graph Neural Networks
Adversarial Attacks
Poisoning Attacks

Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks

Authors: Sarada Krithivasan, Sanchari Sen, Anand Raghunathan | Published: 2020-06-14 | Updated: 2020-09-14
Sparsity Optimization
Adversarial Examples
Adversarial Attacks

Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models

Authors: Mitch Hill, Jonathan Mitchell, Song-Chun Zhu | Published: 2020-05-27 | Updated: 2021-03-18
Adversarial Examples
Adversarial Attacks
Machine Learning Techniques