Literature Database

Zeno: Distributed Stochastic Gradient Descent with Suspicion-based Fault-tolerance

Authors: Cong Xie, Oluwasanmi Koyejo, Indranil Gupta | Published: 2018-05-25 | Updated: 2019-05-18
Reinforcement Learning Optimization
Loss Function
Linear Model

Performing Co-Membership Attacks Against Deep Generative Models

Authors: Kin Sum Liu, Chaowei Xiao, Bo Li, Jie Gao | Published: 2018-05-24 | Updated: 2019-09-20
Privacy Techniques
Membership Inference
Deep Learning Model

Cautious Deep Learning

Authors: Yotam Hechtlinger, Barnabás Póczos, Larry Wasserman | Published: 2018-05-24 | Updated: 2019-02-27
Model Robustness
Label
Probability Distribution

Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients

Authors: Fuxun Yu, Zirui Xu, Yanzhi Wang, Chenchen Liu, Xiang Chen | Published: 2018-05-23 | Updated: 2018-06-07
Model Robustness
Adversarial Training
Adversarial Attack Detection

Phocas: dimensional Byzantine-resilient stochastic gradient descent

Authors: Cong Xie, Oluwasanmi Koyejo, Indranil Gupta | Published: 2018-05-23
Byzantine Attack Countermeasures
Information Security
Loss Function

Approximate Newton-based statistical inference using only stochastic gradients

Authors: Tianyang Li, Anastasios Kyrillidis, Liu Liu, Constantine Caramanis | Published: 2018-05-23 | Updated: 2019-02-05
Sampling Methods
Linear Model
Linear Regression

Adversarially Robust Training through Structured Gradient Regularization

Authors: Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann | Published: 2018-05-22
Model Robustness
Loss Function
Adversarial Attack Detection

Adversarial Attacks on Neural Networks for Graph Data

Authors: Daniel Zügner, Amir Akbarnejad, Stephan Günnemann | Published: 2018-05-21 | Updated: 2021-12-09
Poisoning
Model Robustness Guarantee
Adversarial Attack Detection

Constructing Unrestricted Adversarial Examples with Generative Models

Authors: Yang Song, Rui Shu, Nate Kushman, Stefano Ermon | Published: 2018-05-21 | Updated: 2018-12-02
Adversarial Training
Adversarial Attack Detection
Generative Model

Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference

Authors: Ruying Bao, Sihang Liang, Qingcan Wang | Published: 2018-05-21 | Updated: 2018-09-29
Model Robustness Guarantee
Adversarial Attack Detection
Watermark Design