Literature Database

Adversarial Robustness via Label-Smoothing

Authors: Morgane Goibert, Elvis Dohmatob | Published: 2019-06-27 | Updated: 2019-10-15
Adversarial Examples
Adversarial Attacks
Deep Learning Methods

Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks

Authors: Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz | Published: 2019-06-26 | Updated: 2020-03-03
Model Robustness Guarantees
Model Extraction Attack Detection
Attack Evaluation

The Cost of a Reductions Approach to Private Fair Optimization

Authors: Daniel Alabi | Published: 2019-06-23 | Updated: 2021-05-23
Algorithm Design
Privacy Protection
Optimization Strategies

Adversarial Examples to Fool Iris Recognition Systems

Authors: Sobhan Soleymani, Ali Dabouei, Jeremy Dawson, Nasser M. Nasrabadi | Published: 2019-06-21 | Updated: 2019-07-18
Adversarial Examples
Adversarial Attacks
Deep Learning Methods

Deep Leakage from Gradients

Authors: Ligeng Zhu, Zhijian Liu, Song Han | Published: 2019-06-21 | Updated: 2019-12-19
Privacy Protection
Adversarial Attacks
Defensive Deception

Scalable and Differentially Private Distributed Aggregation in the Shuffled Model

Authors: Badih Ghazi, Rasmus Pagh, Ameya Velingker | Published: 2019-06-19 | Updated: 2019-12-02
Data Extraction and Analysis
Privacy Protection
Federated Learning

Explanations can be manipulated and geometry is to blame

Authors: Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, Marcel Ackermann, Klaus-Robert Müller, Pan Kessel | Published: 2019-06-19 | Updated: 2019-09-25
Model Interpretability
Robustness Evaluation
Attacks on Explainability

Convergence of Adversarial Training in Overparametrized Neural Networks

Authors: Ruiqi Gao, Tianle Cai, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, Jason D. Lee | Published: 2019-06-19 | Updated: 2019-11-09
Robustness Requirements
Adversarial Examples
Deep Learning Methods

Trade-offs and Guarantees of Adversarial Representation Learning for Information Obfuscation

Authors: Han Zhao, Jianfeng Chi, Yuan Tian, Geoffrey J. Gordon | Published: 2019-06-19 | Updated: 2020-10-25
Privacy Protection
Membership Inference
Optimization Problems

Poisoning Attacks with Generative Adversarial Nets

Authors: Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu | Published: 2019-06-18 | Updated: 2019-09-25
Backdoor Attacks
Attack Methods
Generative Adversarial Networks