Adversarial attacks hidden in plain sight Authors: Jan Philip Göpfert, André Artelt, Heiko Wersing, Barbara Hammer | Published: 2019-02-25 | Updated: 2020-04-26 | Tags: Backdoor Attack, Robustness Evaluation, Adversarial Training
Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence Authors: Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani | Published: 2019-02-25 | Updated: 2020-08-17 | Tags: Backdoor Attack, Reinforcement Learning Attack, Adversarial Training
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks Authors: Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang | Published: 2019-02-23 | Updated: 2020-01-10 | Tags: Model Robustness Guarantees, Robustness Evaluation, Adversarial Training
Quantifying Perceptual Distortion of Adversarial Examples Authors: Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis | Published: 2019-02-21 | Tags: Model Robustness Guarantees, Adversarial Training, Adversarial Attack Methods
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch Authors: Gavin Weiguang Ding, Luyu Wang, Xiaomeng Jin | Published: 2019-02-20 | Tags: Poisoning, Adversarial Training, Research Methodology
A Little Is Enough: Circumventing Defenses For Distributed Learning Authors: Moran Baruch, Gilad Baruch, Yoav Goldberg | Published: 2019-02-16 | Tags: Adversarial Training, Adversarial Attack, Adversarial Attack Methods
Model Compression with Adversarial Robustness: A Unified Optimization Framework Authors: Shupeng Gui, Haotao Wang, Chen Yu, Haichuan Yang, Zhangyang Wang, Ji Liu | Published: 2019-02-10 | Updated: 2019-12-28 | Tags: Adversarial Training, Adversarial Attack, Optimization Strategy
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks Authors: Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique | Published: 2019-02-04 | Updated: 2020-05-18 | Tags: Adversarial Examples, Adversarial Training, Adversarial Attack
A New Family of Neural Networks Provably Resistant to Adversarial Attacks Authors: Rakshit Agrawal, Luca de Alfaro, David Helmbold | Published: 2019-02-01 | Tags: Adversarial Examples, Adversarial Training, Adversarial Attack
Improving Adversarial Robustness via Promoting Ensemble Diversity Authors: Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu | Published: 2019-01-25 | Updated: 2019-05-29 | Tags: Model Robustness Guarantees, Adversarial Training, Deep Learning Methods