Robustness Improvement Methods

Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training

Authors: Alfred Laugros, Alice Caplier, Matthieu Ospici | Published: 2020-08-19
Robustness Improvement Methods
Adversarial Examples
Vulnerability to Adversarial Examples

Provably robust deep generative models

Authors: Filipe Condessa, Zico Kolter | Published: 2020-04-22
Robustness Improvement Methods
Adversarial Attacks
Deep Learning Methods

Certifying Joint Adversarial Robustness for Model Ensembles

Authors: Mainuddin Ahmad Jonas, David Evans | Published: 2020-04-21
Model Ensembles
Robustness Improvement Methods
Adversarial Examples

Luring of transferable adversarial perturbations in the black-box paradigm

Authors: Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre | Published: 2020-04-10 | Updated: 2021-03-03
Robustness Improvement Methods
Attack Evaluation
Adversarial Examples

Adversarial Robustness for Code

Authors: Pavol Bielik, Martin Vechev | Published: 2020-02-11 | Updated: 2020-08-15
Poisoning
Robustness Improvement Methods
Adversarial Training

Robustness of Bayesian Neural Networks to Gradient-Based Attacks

Authors: Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti | Published: 2020-02-11 | Updated: 2020-06-24
Robustness Evaluation
Robustness Improvement Methods
Adversarial Attacks

Improving the affordability of robustness training for DNNs

Authors: Sidharth Gupta, Parijat Dube, Ashish Verma | Published: 2020-02-11 | Updated: 2020-04-30
Training Methods
Robustness Improvement Methods
Adversarial Training

Fine-grained Uncertainty Modeling in Neural Networks

Authors: Rahul Soni, Naresh Shah, Jimmy D. Moore | Published: 2020-02-11
Training Methods
Robustness Improvement Methods
Hierarchical Uncertainty Models

Testing Robustness Against Unforeseen Adversaries

Authors: Max Kaufmann, Daniel Kang, Yi Sun, Steven Basart, Xuwang Yin, Mantas Mazeika, Akul Arora, Adam Dziedzic, Franziska Boenisch, Tom Brown, Jacob Steinhardt, Dan Hendrycks | Published: 2019-08-21 | Updated: 2023-10-30
Robustness Improvement Methods
Future Research
Adversarial Attack Methods

Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks

Authors: Ka-Ho Chow, Wenqi Wei, Yanzhao Wu, Ling Liu | Published: 2019-08-21 | Updated: 2019-10-26
Robustness Improvement Methods
Adversarial Examples
Adversarial Attack Methods