Literature Database

From Predictions to Decisions: Using Lookahead Regularization

Authors: Nir Rosenfeld, Sophie Hilgard, Sai Srivatsa Ravindranath, David C. Parkes | Published: 2020-06-20 | Updated: 2020-06-23
Algorithm Design
Uncertainty Estimation
Machine Learning Applications

Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks

Authors: Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chang Liu, Chee Seng Chan, Qiang Yang | Published: 2020-06-20 | Updated: 2020-06-23
Algorithm Design
Poisoning
Privacy Preservation in Machine Learning

Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples

Authors: Josue Ortega Caro, Yilong Ju, Ryan Pyle, Sourav Dey, Wieland Brendel, Fabio Anselmi, Ankit Patel | Published: 2020-06-19 | Updated: 2023-03-08
Adversarial Examples
Adversarial Training
Watermarking Techniques

Backdoor Attacks to Graph Neural Networks

Authors: Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong | Published: 2020-06-19 | Updated: 2021-12-17
Backdoor Model Detection
Backdoor Attacks
Defense Methods

Systematic Attack Surface Reduction For Deployed Sentiment Analysis Models

Authors: Josh Kalin, David Noever, Gerry Dozier | Published: 2020-06-19
Attack Methods
Adversarial Training
Defense Mechanisms

A general framework for defining and optimizing robustness

Authors: Alessandro Tibo, Manfred Jaeger, Kim G. Larsen | Published: 2020-06-19 | Updated: 2021-05-29
Safety Properties
Performance Evaluation
Adversarial Training

Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers

Authors: I. Fursov, A. Zaytsev, N. Kluchnikov, A. Kravchenko, E. Burnaev | Published: 2020-06-19
Adversarial Examples
Adversarial Training
Deep Learning Methods

Towards an Adversarially Robust Normalization Approach

Authors: Muhammad Awais, Fahad Shamshad, Sung-Ho Bae | Published: 2020-06-19
Hyperparameter Optimization
Adversarial Training
Adversarial Attacks

Adversarial Attacks for Multi-view Deep Models

Authors: Xuli Sun, Shiliang Sun | Published: 2020-06-19
Attack Methods
Adversarial Examples
Adversarial Attacks

Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples

Authors: Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen | Published: 2020-06-18 | Updated: 2021-05-20
Adversarial Examples
Adversarial Attacks
Defense Mechanisms