Model Robustness Guarantees

Adversarial Examples: Opportunities and Challenges

Authors: Jiliang Zhang, Chen Li | Published: 2018-09-13 | Updated: 2019-09-23
Model Robustness Guarantees
Adversarial Examples
Defense Methods

Deep Learning in Information Security

Authors: Stefan Thaler, Vlado Menkovski, Milan Petkovic | Published: 2018-09-12
Model Architecture
Model Robustness Guarantees
Feature Extraction Methods

The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

Authors: Saeed Mahloujifar, Dimitrios I. Diochnos, Mohammad Mahmoody | Published: 2018-09-09 | Updated: 2018-11-06
Model Robustness Guarantees
Robustness Analysis
Adversarial Transferability

Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples

Authors: Dan Peng, Zizhan Zheng, Xiaofeng Zhang | Published: 2018-09-08 | Updated: 2018-12-22
Model Robustness Guarantees
Adversarial Example Detection
Adversarial Transferability

Detecting Potential Local Adversarial Examples for Human-Interpretable Defense

Authors: Xavier Renard, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki | Published: 2018-09-07
Model Robustness Guarantees
Adversarial Transferability
Loss of Interpretability

Adversarial Reprogramming of Text Classification Neural Networks

Authors: Paarth Neekhara, Shehzeen Hussain, Shlomo Dubnov, Farinaz Koushanfar | Published: 2018-09-06 | Updated: 2019-08-15
Task Adaptation Methods
Model Robustness Guarantees
Adversarial Transferability

Bridging machine learning and cryptography in defence against adversarial attacks

Authors: Olga Taran, Shideh Rezaeifar, Slava Voloshynovskiy | Published: 2018-09-05
Model Robustness Guarantees
Model Extraction Attack Detection
Robustness Analysis

Adversarial Attacks on Node Embeddings via Graph Poisoning

Authors: Aleksandar Bojchevski, Stephan Günnemann | Published: 2018-09-04 | Updated: 2019-05-27
Poisoning
Model Robustness Guarantees
Robustness Analysis

Lipschitz regularized Deep Neural Networks generalize and are adversarially robust

Authors: Chris Finlay, Jeff Calder, Bilal Abbasi, Adam Oberman | Published: 2018-08-28 | Updated: 2019-09-12
Model Robustness Guarantees
Robustness Analysis
Adversarial Training

Mitigation of Adversarial Attacks through Embedded Feature Selection

Authors: Ziyi Bao, Luis Muñoz-González, Emil C. Lupu | Published: 2018-08-16
Model Robustness Guarantees
Robustness Analysis
Adversarial Attacks