Adversarial Training

A Little Is Enough: Circumventing Defenses For Distributed Learning

Authors: Moran Baruch, Gilad Baruch, Yoav Goldberg | Published: 2019-02-16
Adversarial Training
Adversarial Attacks
Adversarial Attack Methods

Model Compression with Adversarial Robustness: A Unified Optimization Framework

Authors: Shupeng Gui, Haotao Wang, Chen Yu, Haichuan Yang, Zhangyang Wang, Ji Liu | Published: 2019-02-10 | Updated: 2019-12-28
Adversarial Training
Adversarial Attacks
Optimization Strategies

Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks

Authors: Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique | Published: 2019-02-04 | Updated: 2020-05-18
Adversarial Examples
Adversarial Training
Adversarial Attacks

A New Family of Neural Networks Provably Resistant to Adversarial Attacks

Authors: Rakshit Agrawal, Luca de Alfaro, David Helmbold | Published: 2019-02-01
Adversarial Examples
Adversarial Training
Adversarial Attacks

Improving Adversarial Robustness via Promoting Ensemble Diversity

Authors: Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu | Published: 2019-01-25 | Updated: 2019-05-29
Model Robustness Guarantees
Adversarial Training
Deep Learning Methods

PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning

Authors: Mehdi Jafarnia-Jahromi, Tasmin Chowdhury, Hsin-Tai Wu, Sayandev Mukherjee | Published: 2018-12-25 | Updated: 2020-01-04
Robustness
Adversarial Example Detection
Adversarial Training

Trust Region Based Adversarial Attack on Neural Networks

Authors: Zhewei Yao, Amir Gholami, Peng Xu, Kurt Keutzer, Michael Mahoney | Published: 2018-12-16
Model Robustness Guarantees
Robustness
Adversarial Training

Prior Networks for Detection of Adversarial Attacks

Authors: Andrey Malinin, Mark Gales | Published: 2018-12-06
Model Extraction Attack Detection
Robustness Evaluation
Adversarial Training

On Configurable Defense against Adversarial Example Attacks

Authors: Bo Luo, Min Li, Yu Li, Qiang Xu | Published: 2018-12-06
Adversarial Examples
Adversarial Training
Defense Methods

Model-Reuse Attacks on Deep Learning Systems

Authors: Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang | Published: 2018-12-02
Model Extraction Attacks
Model Extraction Attack Detection
Adversarial Training