Adversarial Attack Detection

Adversarial Attacks on Deep Neural Networks for Time Series Classification

Authors: Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, Pierre-Alain Muller | Published: 2019-03-17 | Updated: 2019-04-26
Adversarial Examples
Adversarial Training
Adversarial Attack Detection

Defending Against Adversarial Attacks by Leveraging an Entire GAN

Authors: Gokula Krishnan Santhanam, Paulina Grnarova | Published: 2018-05-27
Trigger Detection
Model Robustness
Adversarial Attack Detection

Unsupervised Learning for Trustworthy IoT

Authors: Nikhil Banerjee, Thanassis Giannetsos, Emmanouil Panaousis, Clive Cheong Took | Published: 2018-05-25
Data-Driven Clustering
User Behavior Analysis
Adversarial Attack Detection

Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients

Authors: Fuxun Yu, Zirui Xu, Yanzhi Wang, Chenchen Liu, Xiang Chen | Published: 2018-05-23 | Updated: 2018-06-07
Model Robustness
Adversarial Training
Adversarial Attack Detection

Adversarially Robust Training through Structured Gradient Regularization

Authors: Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann | Published: 2018-05-22
Model Robustness
Loss Function
Adversarial Attack Detection

Adversarial Attacks on Neural Networks for Graph Data

Authors: Daniel Zügner, Amir Akbarnejad, Stephan Günnemann | Published: 2018-05-21 | Updated: 2021-12-09
Poisoning
Model Robustness Guarantees
Adversarial Attack Detection

Constructing Unrestricted Adversarial Examples with Generative Models

Authors: Yang Song, Rui Shu, Nate Kushman, Stefano Ermon | Published: 2018-05-21 | Updated: 2018-12-02
Adversarial Training
Adversarial Attack Detection
Generative Models

Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference

Authors: Ruying Bao, Sihang Liang, Qingcan Wang | Published: 2018-05-21 | Updated: 2018-09-29
Model Robustness Guarantees
Adversarial Attack Detection
Watermark Design

Targeted Adversarial Examples for Black Box Audio Systems

Authors: Rohan Taori, Amog Kamsetty, Brenton Chu, Nikita Vemuri | Published: 2018-05-20 | Updated: 2019-08-20
Model Robustness Guarantees
Adversarial Attack Detection
Speech Recognition Systems

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

Authors: Pouya Samangouei, Maya Kabkab, Rama Chellappa | Published: 2018-05-17 | Updated: 2018-05-18
Model Robustness Guarantees
Information Security
Adversarial Attack Detection