Model Robustness Guarantees

Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network

Authors: Xuanqing Liu, Yao Li, Chongruo Wu, Cho-Jui Hsieh | Published: 2018-10-01 | Updated: 2019-05-04
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Training

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks

Authors: Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu | Published: 2018-09-30 | Updated: 2019-11-23
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Attack Methods

Adversarial Attacks on Cognitive Self-Organizing Networks: The Challenge and the Way Forward

Authors: Muhammad Usama, Junaid Qadir, Ala Al-Fuqaha | Published: 2018-09-26
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Methods

Neural Networks with Structural Resistance to Adversarial Attacks

Authors: Luca de Alfaro | Published: 2018-09-25
Poisoning
Model Robustness Guarantees
Robustness Improvement Methods

Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization

Authors: Bao Wang, Alex T. Lin, Wei Zhu, Penghang Yin, Andrea L. Bertozzi, Stanley J. Osher | Published: 2018-09-23 | Updated: 2020-04-29
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Methods

Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks

Authors: Siyue Wang, Xiao Wang, Pu Zhao, Wujie Wen, David Kaeli, Peter Chin, Xue Lin | Published: 2018-09-13
Model Robustness Guarantees
Robustness Improvement
Adversarial Examples

Query-Efficient Black-Box Attack by Active Learning

Authors: Pengcheng Li, Jinfeng Yi, Lijun Zhang | Published: 2018-09-13
Query Generation Methods
Model Robustness Guarantees
Adversarial Attack

Adversarial Examples: Opportunities and Challenges

Authors: Jiliang Zhang, Chen Li | Published: 2018-09-13 | Updated: 2019-09-23
Model Robustness Guarantees
Adversarial Examples
Defense Methods

Deep Learning in Information Security

Authors: Stefan Thaler, Vlado Menkovski, Milan Petkovic | Published: 2018-09-12
Model Architecture
Model Robustness Guarantees
Feature Extraction Methods

The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

Authors: Saeed Mahloujifar, Dimitrios I. Diochnos, Mohammad Mahmoody | Published: 2018-09-09 | Updated: 2018-11-06
Model Robustness Guarantees
Robustness Analysis
Adversarial Transferability