Adversarial Training

Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN

Authors: Ke Sun, Zhanxing Zhu, Zhouchen Lin | Published: 2019-02-28
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Training

Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors

Authors: Ke Sun, Zhanxing Zhu, Zhouchen Lin | Published: 2019-02-28
Robustness Evaluation
Adversarial Example Detection
Adversarial Training

Adversarial Attacks on Time Series

Authors: Fazle Karim, Somshubra Majumdar, Houshang Darabi | Published: 2019-02-27 | Updated: 2019-03-01
Model Extraction Attack
Adversarial Examples
Adversarial Training

The Best Defense Is a Good Offense: Adversarial Attacks to Avoid Modulation Detection

Authors: Muhammad Zaid Hameed, Andras Gyorgy, Deniz Gunduz | Published: 2019-02-27 | Updated: 2020-04-07
Adversarial Examples
Adversarial Training
Wireless Channel Detection

Design of intentional backdoors in sequential models

Authors: Zhaoyuan Yang, Naresh Iyer, Johan Reimann, Nurali Virani | Published: 2019-02-26
Backdoor Attack
Reinforcement Learning Attack
Adversarial Training

Adversarial attacks hidden in plain sight

Authors: Jan Philip Göpfert, André Artelt, Heiko Wersing, Barbara Hammer | Published: 2019-02-25 | Updated: 2020-04-26
Backdoor Attack
Robustness Evaluation
Adversarial Training

Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence

Authors: Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani | Published: 2019-02-25 | Updated: 2020-08-17
Backdoor Attack
Reinforcement Learning Attack
Adversarial Training

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

Authors: Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang | Published: 2019-02-23 | Updated: 2020-01-10
Model Robustness Guarantees
Robustness Evaluation
Adversarial Training

Quantifying Perceptual Distortion of Adversarial Examples

Authors: Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis | Published: 2019-02-21
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Methods

advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch

Authors: Gavin Weiguang Ding, Luyu Wang, Xiaomeng Jin | Published: 2019-02-20
Poisoning
Adversarial Training
Research Methodology
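
The advertorch toolbox packages standard attacks as PyTorch objects. Below is a minimal sketch of crafting L-infinity PGD adversarial examples, following the usage pattern documented in the advertorch v0.1 README; the toy linear classifier and random batch are placeholder assumptions for illustration, not from the paper.

import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Placeholder classifier; any nn.Module that returns class logits works.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# Configure the attack: eps bounds the L-inf perturbation, eps_iter is the
# per-step size, and nb_iter is the number of PGD iterations.
adversary = LinfPGDAttack(
    model, loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3, nb_iter=40, eps_iter=0.01, rand_init=True,
    clip_min=0.0, clip_max=1.0, targeted=False)

x = torch.rand(8, 1, 28, 28)      # dummy batch of MNIST-shaped inputs
y = torch.randint(0, 10, (8,))    # dummy ground-truth labels
x_adv = adversary.perturb(x, y)   # adversarial examples, same shape as x

The same perturb(x, y) interface applies to the toolbox's other attack classes, which is what makes it convenient for benchmarking robustness across attack types.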