Adversarial Examples

Sitatapatra: Blocking the Transfer of Adversarial Samples

Authors: Ilia Shumailov, Xitong Gao, Yiren Zhao, Robert Mullins, Ross Anderson, Cheng-Zhong Xu | Published: 2019-01-23 | Updated: 2019-11-21
Model Robustness Guarantees
Adversarial Examples
Non-Transferability Detection

Universal Rules for Fooling Deep Neural Networks based Text Classification

Authors: Di Li, Danilo Vasconcellos Vargas, Sakurai Kouichi | Published: 2019-01-22 | Updated: 2019-04-03
Trigger Detection
Adversarial Examples
Deep Learning Methods

Adversarial Attack and Defense on Graph Data: A Survey

Authors: Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Yixin Liu, Philip S. Yu, Lifang He, Bo Li | Published: 2018-12-26 | Updated: 2022-10-06
Poisoning
Robustness
Adversarial Examples

Deep-RBF Networks Revisited: Robust Classification with Rejection

Authors: Pourya Habib Zadeh, Reshad Hosseini, Suvrit Sra | Published: 2018-12-07
Model Robustness Guarantees
Experimental Validation
Adversarial Examples

Adversarial Attacks, Regression, and Numerical Stability Regularization

Authors: Andre T. Nguyen, Edward Raff | Published: 2018-12-07
Robust Regression
Adversarial Examples
Defense Effectiveness Analysis

The Limitations of Model Uncertainty in Adversarial Settings

Authors: Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes | Published: 2018-12-06 | Updated: 2019-11-17
Model Robustness Guarantees
Robustness Evaluation
Adversarial Examples

On Configurable Defense against Adversarial Example Attacks

Authors: Bo Luo, Min Li, Yu Li, Qiang Xu | Published: 2018-12-06
Adversarial Examples
Adversarial Training
Defense Methods

Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples

Authors: Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li | Published: 2018-12-05 | Updated: 2020-01-20
Model Robustness Guarantees
Adversarial Examples
Defense Methods

Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification

Authors: Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock | Published: 2018-12-01 | Updated: 2019-04-04
Text Classification Applications
Adversarial Examples
Optimization Problems

An Adversarial Approach for Explainable AI in Intrusion Detection Systems

Authors: Daniel L. Marino, Chathurika S. Wickramasinghe, Milos Manic | Published: 2018-11-28
Identification of AI Outputs
Model Performance Evaluation
Adversarial Examples