Model Robustness Guarantees

Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks

Authors: Thomas Brunner, Frederik Diehl, Michael Truong Le, Alois Knoll | Published: 2018-12-24 | Updated: 2019-05-05
Model Robustness Guarantees
Robustness
Adversarial Example Detection

Designing Adversarially Resilient Classifiers using Resilient Feature Engineering

Authors: Kevin Eykholt, Atul Prakash | Published: 2018-12-17
Multi-class Classification
Model Robustness Guarantees
Robustness

Trust Region Based Adversarial Attack on Neural Networks

Authors: Zhewei Yao, Amir Gholami, Peng Xu, Kurt Keutzer, Michael Mahoney | Published: 2018-12-16
Model Robustness Guarantees
Robustness
Adversarial Training

Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples

Authors: Emilio Rafael Balda, Arash Behboodi, Rudolf Mathar | Published: 2018-12-15
Model Robustness Guarantees
Robust Optimization
Adversarial Example Detection

AutoGAN: Robust Classifier Against Adversarial Attacks

Authors: Blerta Lindqvist, Shridatt Sugrim, Rauf Izmailov | Published: 2018-12-08
Model Robustness Guarantees
Robustness Improvement Methods
Experimental Validation

Deep-RBF Networks Revisited: Robust Classification with Rejection

Authors: Pourya Habib Zadeh, Reshad Hosseini, Suvrit Sra | Published: 2018-12-07
Model Robustness Guarantees
Experimental Validation
Adversarial Examples

The Limitations of Model Uncertainty in Adversarial Settings

Authors: Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes | Published: 2018-12-06 | Updated: 2019-11-17
Model Robustness Guarantees
Robustness Evaluation
Adversarial Examples

Regularized Ensembles and Transferability in Adversarial Learning

Authors: Yifan Chen, Yevgeniy Vorobeychik | Published: 2018-12-05
Model Robustness Guarantees
Generalization Performance
Knowledge Transferability

Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples

Authors: Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li | Published: 2018-12-05 | Updated: 2020-01-20
Model Robustness Guarantees
Adversarial Examples
Defense Methods

FineFool: Fine Object Contour Attack via Attention

Authors: Jinyin Chen, Haibin Zheng, Hui Xiong, Mengmeng Su | Published: 2018-12-01
Model Robustness Guarantees
Effective Perturbation Methods
Weight Update Methods