Model Robustness Guarantees

Improving Adversarial Robustness via Promoting Ensemble Diversity

Authors: Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu | Published: 2019-01-25 | Updated: 2019-05-29
Model Robustness Guarantees
Adversarial Training
Deep Learning Methods

Sitatapatra: Blocking the Transfer of Adversarial Samples

Authors: Ilia Shumailov, Xitong Gao, Yiren Zhao, Robert Mullins, Ross Anderson, Cheng-Zhong Xu | Published: 2019-01-23 | Updated: 2019-11-21
Model Robustness Guarantees
Adversarial Examples
Non-Transferability Detection

A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples

Authors: Qiang Zeng, Jianhai Su, Chenglong Fu, Golam Kayas, Lannan Luo | Published: 2018-12-26 | Updated: 2019-12-03
Model Robustness Guarantees
Adversarial Example Detection
Speech Recognition Process

Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks

Authors: Thomas Brunner, Frederik Diehl, Michael Truong Le, Alois Knoll | Published: 2018-12-24 | Updated: 2019-05-05
Model Robustness Guarantees
Robustness
Adversarial Example Detection

Designing Adversarially Resilient Classifiers using Resilient Feature Engineering

Authors: Kevin Eykholt, Atul Prakash | Published: 2018-12-17
Multi-Class Classification
Model Robustness Guarantees
Robustness

Trust Region Based Adversarial Attack on Neural Networks

Authors: Zhewei Yao, Amir Gholami, Peng Xu, Kurt Keutzer, Michael Mahoney | Published: 2018-12-16
Model Robustness Guarantees
Robustness
Adversarial Training

Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples

Authors: Emilio Rafael Balda, Arash Behboodi, Rudolf Mathar | Published: 2018-12-15
Model Robustness Guarantees
Robust Optimization
Adversarial Example Detection

AutoGAN: Robust Classifier Against Adversarial Attacks

Authors: Blerta Lindqvist, Shridatt Sugrim, Rauf Izmailov | Published: 2018-12-08
Model Robustness Guarantees
Robustness Improvement Methods
Experimental Validation

Deep-RBF Networks Revisited: Robust Classification with Rejection

Authors: Pourya Habib Zadeh, Reshad Hosseini, Suvrit Sra | Published: 2018-12-07
Model Robustness Guarantees
Experimental Validation
Adversarial Examples

The Limitations of Model Uncertainty in Adversarial Settings

Authors: Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes | Published: 2018-12-06 | Updated: 2019-11-17
Model Robustness Guarantees
Robustness Evaluation
Adversarial Examples