Adversarial Examples

Robustness, Privacy, and Generalization of Adversarial Training

Authors: Fengxiang He, Shaopeng Fu, Bohan Wang, Dacheng Tao | Published: 2020-12-25
Relationship between Robustness and Privacy
Adversarial Examples
Adversarial Training

Gradient-Free Adversarial Attacks for Bayesian Neural Networks

Authors: Matthew Yuan, Matthew Wicker, Luca Laurenti | Published: 2020-12-23
Attack Evaluation
Adversarial Examples
Defense Methods

FoggySight: A Scheme for Facial Lookup Privacy

Authors: Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno | Published: 2020-12-15
Data Privacy Assessment
Adversarial Examples
Face Recognition

Channel Effects on Surrogate Models of Adversarial Attacks against Wireless Signal Classifiers

Authors: Brian Kim, Yalin E. Sagduyu, Tugba Erpek, Kemal Davaslioglu, Sennur Ulukus | Published: 2020-12-03 | Updated: 2021-03-09
Attack Methods
Adversarial Examples
Adversarial Learning

Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack

Authors: Rui Shu, Tianpei Xia, Laurie Williams, Tim Menzies | Published: 2020-11-23 | Updated: 2021-10-12
Model Performance Evaluation
Adversarial Examples
Adversarial Attacks

Efficient and Transferable Adversarial Examples from Bayesian Neural Networks

Authors: Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen | Published: 2020-11-10 | Updated: 2022-06-18
Model Performance Evaluation
Adversarial Examples
Adversarial Attacks

Adversarial Examples in Constrained Domains

Authors: Ryan Sheatsley, Nicolas Papernot, Michael Weisman, Gunjan Verma, Patrick McDaniel | Published: 2020-11-02 | Updated: 2022-09-09
Adversarial Examples
Adversarial Attacks
Feature Engineering

Reliable Graph Neural Networks via Robust Aggregation

Authors: Simon Geisler, Daniel Zügner, Stephan Günnemann | Published: 2020-10-29
Adversarial Examples
Certificate Ratio
Evaluation Methods

Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?

Authors: Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, Stephan Günnemann | Published: 2020-10-28 | Updated: 2021-06-11
Adversarial Examples
Challenges of Generative Models
Evaluation Methods

Asymptotic Behavior of Adversarial Training in Binary Classification

Authors: Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis | Published: 2020-10-26 | Updated: 2021-07-14
Attack Evaluation
Adversarial Examples
Regularization