Adversarial Training

Stronger and Faster Wasserstein Adversarial Attacks

Authors: Kaiwen Wu, Allen Houze Wang, Yaoliang Yu | Published: 2020-08-06
Watermarking
Adversarial Training
Adversarial Attack

Training DNN Model with Secret Key for Model Protection

Authors: MaungMaung AprilPyone, Hitoshi Kiya | Published: 2020-08-06
Watermarking
Adversarial Training
Machine Learning

On the relationship between class selectivity, dimensionality, and robustness

Authors: Matthew L. Leavitt, Ari S. Morcos | Published: 2020-07-08 | Updated: 2020-10-13
Poisoning
Adversarial Training
Vulnerability Analysis

How benign is benign overfitting?

Authors: Amartya Sanyal, Puneet K Dokania, Varun Kanade, Philip H. S. Torr | Published: 2020-07-08
Adversarial Examples
Adversarial Training
Overfitting and Memorization

Defending against Backdoors in Federated Learning with Robust Learning Rate

Authors: Mustafa Safa Ozdayi, Murat Kantarcioglu, Yulia R. Gel | Published: 2020-07-07 | Updated: 2021-07-29
Backdoor Attack
Adversarial Training
Defense Mechanism

Backdoor attacks and defenses in feature-partitioned collaborative learning

Authors: Yang Liu, Zhihao Yi, Tianjian Chen | Published: 2020-07-07
Poisoning
Adversarial Training
Defense Mechanism

Stochastic Linear Bandits Robust to Adversarial Attacks

Authors: Ilija Bogunovic, Arpan Losalka, Andreas Krause, Jonathan Scarlett | Published: 2020-07-07 | Updated: 2020-10-27
Uncertainty Quantification
Adversarial Training
Computational Efficiency

Robust Learning with Frequency Domain Regularization

Authors: Weiyu Guo, Yidong Ouyang | Published: 2020-07-07
Adversarial Training
Machine Learning Fundamentals
Computational Efficiency

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

Authors: Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem | Published: 2020-07-07 | Updated: 2020-07-18
Attack Pattern Extraction
Adversarial Examples
Adversarial Training

Black-box Adversarial Example Generation with Normalizing Flows

Authors: Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie | Published: 2020-07-06
Adversarial Training
Challenges of Generative Models
Computational Efficiency