Model Robustness Guarantees

Adversarial Attacks on Neural Networks for Graph Data

Authors: Daniel Zügner, Amir Akbarnejad, Stephan Günnemann | Published: 2018-05-21 | Updated: 2021-12-09
Poisoning
Model Robustness Guarantees
Adversarial Attack Detection

Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference

Authors: Ruying Bao, Sihang Liang, Qingcan Wang | Published: 2018-05-21 | Updated: 2018-09-29
Model Robustness Guarantees
Adversarial Attack Detection
Watermark Design

Targeted Adversarial Examples for Black Box Audio Systems

Authors: Rohan Taori, Amog Kamsetty, Brenton Chu, Nikita Vemuri | Published: 2018-05-20 | Updated: 2019-08-20
Model Robustness Guarantees
Adversarial Attack Detection
Speech Recognition Systems

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

Authors: Pouya Samangouei, Maya Kabkab, Rama Chellappa | Published: 2018-05-17 | Updated: 2018-05-18
Model Robustness Guarantees
Information Security
Adversarial Attack Detection

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing

Authors: Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang | Published: 2018-05-14 | Updated: 2018-05-17
Model Robustness Guarantees
Adversarial Examples
Adversarial Attack Detection

How Robust are Deep Neural Networks?

Authors: Biswa Sengupta, Karl J. Friston | Published: 2018-04-30
Model Robustness Guarantees
Deep-Learning-Based IDS
Watermarking Techniques

Query-Efficient Black-Box Attack Against Sequence-Based Malware Classifiers

Authors: Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach | Published: 2018-04-23 | Updated: 2020-10-03
Query Generation Methods
Model Robustness Guarantees
Adversarial Attack Methods

ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector

Authors: Shang-Tse Chen, Cory Cornelius, Jason Martin, Duen Horng Chau | Published: 2018-04-16 | Updated: 2019-05-01
Faster R-CNN
Model Robustness Guarantees
Adversarial Attack Methods

On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

Authors: Anish Athalye, Nicholas Carlini | Published: 2018-04-10
Model Robustness Guarantees
Adversarial Attacks
Watermarking

Adversarial Training Versus Weight Decay

Authors: Angus Galloway, Thomas Tanay, Graham W. Taylor | Published: 2018-04-10 | Updated: 2018-07-23
Model Robustness Guarantees
Adversarial Training
Adversarial Attacks