Adversarial Examples

Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model

Authors: Payam Delgosha, Hamed Hassani, Ramtin Pedarsani | Published: 2021-04-05
Convergence Analysis
Adversarial Examples
Optimization Problems

SGBA: A Stealthy Scapegoat Backdoor Attack against Deep Neural Networks

Authors: Ying He, Zhili Shen, Chang Xia, Jingyu Hua, Wei Tong, Sheng Zhong | Published: 2021-04-02 | Updated: 2022-05-16
Backdoor Attack Methods
Poisoning Attacks
Adversarial Examples

Smoothness Analysis of Adversarial Training

Authors: Sekitoshi Kanai, Masanori Yamada, Hiroshi Takahashi, Yuki Yamanaka, Yasutoshi Ida | Published: 2021-03-02 | Updated: 2023-03-06
Data Dependency
Adversarial Examples
Adversarial Spectral Attack Detection

Adversarial Information Bottleneck

Authors: Penglong Zhai, Shihua Zhang | Published: 2021-02-28 | Updated: 2021-03-03
Model Performance Evaluation
Adversarial Examples
Adversarial Training

Adversarial Robustness with Non-uniform Perturbations

Authors: Ecenaz Erdemir, Jeffrey Bickford, Luca Melis, Sergul Aydore | Published: 2021-02-24 | Updated: 2021-10-29
Malware Detection Methods
Adversarial Examples
Adversarial Example Detection

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks

Authors: Ginevra Carbone, Guido Sanguinetti, Luca Bortolussi | Published: 2021-02-22 | Updated: 2022-05-05
Bayesian Classification
Poisoning
Adversarial Examples

Bridging the Gap Between Adversarial Robustness and Optimization Bias

Authors: Fartash Faghri, Sven Gowal, Cristina Vasconcelos, David J. Fleet, Fabian Pedregosa, Nicolas Le Roux | Published: 2021-02-17 | Updated: 2021-06-07
Model Architecture
Adversarial Examples
Adversarial Training

Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons

Authors: Bohang Zhang, Tianle Cai, Zhou Lu, Di He, Liwei Wang | Published: 2021-02-10 | Updated: 2021-06-14
Dataset Evaluation
Model Performance Evaluation
Adversarial Examples

Generating Black-Box Adversarial Examples in Sparse Domain

Authors: Hadi Zanddizari, Behnam Zeinali, J. Morris Chang | Published: 2021-01-22 | Updated: 2021-10-15
Performance Evaluation
Adversarial Examples
Adversarial Attacks

With False Friends Like These, Who Can Notice Mistakes?

Authors: Lue Tao, Lei Feng, Jinfeng Yi, Songcan Chen | Published: 2020-12-29 | Updated: 2021-12-13
Adversarial Examples
Adversarial Learning
Defense Mechanisms