Improving Adversarial Robustness via Unlabeled Out-of-Domain Data Authors: Zhun Deng, Linjun Zhang, Amirata Ghorbani, James Zou | Published: 2020-06-15 | Updated: 2021-02-21 | Tags: Semi-Supervised Learning, Adversarial Training, Statistical Methods
Weight Poisoning Attacks on Pre-trained Models Authors: Keita Kurita, Paul Michel, Graham Neubig | Published: 2020-04-14 | Tags: Backdoor Attacks, Poisoning, Adversarial Training
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions Authors: Jon Vadillo, Roberto Santana, Jose A. Lozano | Published: 2020-04-14 | Updated: 2023-01-25 | Tags: Robustness Evaluation, Adversarial Examples, Adversarial Training
Blind Adversarial Training: Balance Accuracy and Robustness Authors: Haidong Xie, Xueshuang Xiang, Naijin Liu, Bin Dong | Published: 2020-04-10 | Tags: Robustness, Adversarial Training, Adaptive Adversarial Training
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies Authors: Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal, Jiliang Tang | Published: 2020-03-02 | Updated: 2020-12-12 | Tags: Poisoning, Adversarial Examples, Adversarial Training
Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space Authors: Camilo Pestana, Naveed Akhtar, Wei Liu, David Glance, Ajmal Mian | Published: 2020-02-25 | Tags: Robustness Evaluation, Adversarial Training, Defense Methods
Practical Fast Gradient Sign Attack against Mammographic Image Classifier Authors: Ibrahim Yilmaz | Published: 2020-01-27 | Tags: Adversarial Training, Adversarial Attack Detection, Machine Learning Methods
Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks Authors: Rehana Mahfuz, Rajeev Sahay, Aly El Gamal | Published: 2020-01-26 | Tags: Adversarial Training, Adversarial Attack Detection, Defense Effectiveness Analysis
Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks Authors: Farnaz Behnia, Ali Mirzaeian, Mohammad Sabokrou, Sai Manoj, Tinoosh Mohsenin, Khaled N. Khasawneh, Liang Zhao, Houman Homayoun, Avesta Sasan | Published: 2020-01-16 | Tags: Adversarial Examples, Adversarial Training, Computational Complexity
A simple way to make neural networks robust against diverse image corruptions Authors: Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel | Published: 2020-01-16 | Updated: 2020-07-22 | Tags: Robustness Analysis, Convergence Analysis, Adversarial Training