Robustness Analysis

Are adversarial examples inevitable?

Authors: Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, Tom Goldstein | Published: 2018-09-06 | Updated: 2020-02-03
Robustness Analysis
Adversarial Examples
Adversarial Example Detection

Bridging machine learning and cryptography in defence against adversarial attacks

Authors: Olga Taran, Shideh Rezaeifar, Slava Voloshynovskiy | Published: 2018-09-05
Model Robustness Guarantees
Model Extraction Attack Detection
Robustness Analysis

Adversarial Attacks on Node Embeddings via Graph Poisoning

Authors: Aleksandar Bojchevski, Stephan Günnemann | Published: 2018-09-04 | Updated: 2019-05-27
Poisoning
Model Robustness Guarantees
Robustness Analysis

Adversarial Attack Type I: Cheat Classifiers by Significant Changes

Authors: Sanli Tang, Xiaolin Huang, Mingjian Chen, Chengjin Sun, Jie Yang | Published: 2018-09-03 | Updated: 2019-07-22
Trigger Detection
Robustness Analysis
Adversarial Transferability

Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

Authors: Cong Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David Miller | Published: 2018-08-30
Backdoor Attack
Backdoor Attack Countermeasures
Robustness Analysis

Lipschitz regularized Deep Neural Networks generalize and are adversarially robust

Authors: Chris Finlay, Jeff Calder, Bilal Abbasi, Adam Oberman | Published: 2018-08-28 | Updated: 2019-09-12
Model Robustness Guarantees
Robustness Analysis
Adversarial Training

Stochastic Combinatorial Ensembles for Defending Against Adversarial Examples

Authors: George A. Adam, Petr Smirnov, David Duvenaud, Benjamin Haibe-Kains, Anna Goldenberg | Published: 2018-08-20 | Updated: 2018-09-08
Robustness Analysis
Adversarial Attacks
Probability Distribution

Mitigation of Adversarial Attacks through Embedded Feature Selection

Authors: Ziyi Bao, Luis Muñoz-González, Emil C. Lupu | Published: 2018-08-16
Model Robustness Guarantees
Robustness Analysis
Adversarial Attacks

Mitigating Sybils in Federated Learning Poisoning

Authors: Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh | Published: 2018-08-14 | Updated: 2020-07-15
Poisoning
Robustness Analysis
Adversarial Attacks

Using Randomness to Improve Robustness of Machine-Learning Models Against Evasion Attacks

Authors: Fan Yang, Zhiyuan Chen | Published: 2018-08-10
Model Robustness Guarantees
Robustness Analysis
Adversarial Attacks