Model Robustness Guarantees

Sequential Attacks on Agents for Long-Term Adversarial Goals

Authors: Edgar Tretschk, Seong Joon Oh, Mario Fritz | Published: 2018-05-31 | Updated: 2018-07-05
Model Robustness Guarantees
Reinforcement Learning
Adversarial Transferability

Adversarial Noise Attacks of Deep Learning Architectures — Stability Analysis via Sparse Modeled Signals

Authors: Yaniv Romano, Aviad Aberdam, Jeremias Sulam, Michael Elad | Published: 2018-05-29 | Updated: 2019-08-05
Sparsity Optimization
Model Robustness Guarantees
Watermark Evaluation

Detecting Deceptive Reviews using Generative Adversarial Networks

Authors: Hojjat Aghakhani, Aravind Machiry, Shirin Nilizadeh, Christopher Kruegel, Giovanni Vigna | Published: 2018-05-25
Backdoor Model Detection
Model Robustness Guarantees
Deception Detection

Adversarial Attacks on Neural Networks for Graph Data

Authors: Daniel Zügner, Amir Akbarnejad, Stephan Günnemann | Published: 2018-05-21 | Updated: 2021-12-09
Poisoning
Model Robustness Guarantees
Adversarial Attack Detection

Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference

Authors: Ruying Bao, Sihang Liang, Qingcan Wang | Published: 2018-05-21 | Updated: 2018-09-29
Model Robustness Guarantees
Adversarial Attack Detection
Watermark Design

Targeted Adversarial Examples for Black Box Audio Systems

Authors: Rohan Taori, Amog Kamsetty, Brenton Chu, Nikita Vemuri | Published: 2018-05-20 | Updated: 2019-08-20
Model Robustness Guarantees
Adversarial Attack Detection
Speech Recognition Systems

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

Authors: Pouya Samangouei, Maya Kabkab, Rama Chellappa | Published: 2018-05-17 | Updated: 2018-05-18
Model Robustness Guarantees
Information Security
Adversarial Attack Detection

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing

Authors: Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang | Published: 2018-05-14 | Updated: 2018-05-17
Model Robustness Guarantees
Adversarial Examples
Adversarial Attack Detection

How Robust are Deep Neural Networks?

Authors: Biswa Sengupta, Karl J. Friston | Published: 2018-04-30
Model Robustness Guarantees
Deep Learning-Based IDS
Watermarking Techniques

Query-Efficient Black-Box Attack Against Sequence-Based Malware Classifiers

Authors: Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach | Published: 2018-04-23 | Updated: 2020-10-03
Query Generation Methods
Model Robustness Guarantees
Adversarial Attack Methods