Adversarial Examples

Adversarial Attack and Defense on Graph Data: A Survey

Authors: Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Yixin Liu, Philip S. Yu, Lifang He, Bo Li | Published: 2018-12-26 | Updated: 2022-10-06
Poisoning
Robustness
Adversarial Examples

Deep-RBF Networks Revisited: Robust Classification with Rejection

Authors: Pourya Habib Zadeh, Reshad Hosseini, Suvrit Sra | Published: 2018-12-07
Model Robustness Guarantees
Experimental Validation
Adversarial Examples

Adversarial Attacks, Regression, and Numerical Stability Regularization

Authors: Andre T. Nguyen, Edward Raff | Published: 2018-12-07
Robust Regression
Adversarial Examples
Defense Effectiveness Analysis

The Limitations of Model Uncertainty in Adversarial Settings

Authors: Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes | Published: 2018-12-06 | Updated: 2019-11-17
Model Robustness Guarantees
Robustness Evaluation
Adversarial Examples

On Configurable Defense against Adversarial Example Attacks

Authors: Bo Luo, Min Li, Yu Li, Qiang Xu | Published: 2018-12-06
Adversarial Examples
Adversarial Learning
Defense Methods

Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples

Authors: Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li | Published: 2018-12-05 | Updated: 2020-01-20
Model Robustness Guarantees
Adversarial Examples
Defense Methods

Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification

Authors: Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock | Published: 2018-12-01 | Updated: 2019-04-04
Text Classification Applications
Adversarial Examples
Optimization Problems

An Adversarial Approach for Explainable AI in Intrusion Detection Systems

Authors: Daniel L. Marino, Chathurika S. Wickramasinghe, Milos Manic | Published: 2018-11-28
Identification of AI-Generated Output
Model Performance Evaluation
Adversarial Examples

Active Deep Learning Attacks under Strict Rate Limitations for Online API Calls

Authors: Yi Shi, Yalin E. Sagduyu, Kemal Davaslioglu, Jason H. Li | Published: 2018-11-05
Online Learning
Membership Inference
Adversarial Examples

Excessive Invariance Causes Adversarial Vulnerability

Authors: Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge | Published: 2018-11-01 | Updated: 2020-07-12
Model Inversion
Adversarial Examples
Adversarial Training