Poisoning

CEB Improves Model Robustness

Authors: Ian Fischer, Alexander A. Alemi | Published: 2020-02-13
Poisoning
Model Selection Methods
Robustness Evaluation

Adversarial Robustness for Code

Authors: Pavol Bielik, Martin Vechev | Published: 2020-02-11 | Updated: 2020-08-15
Poisoning
Robustness Improvement Methods
Adversarial Training

Adversarial Data Encryption

Authors: Yingdong Hu, Liang Zhang, Wei Shan, Xiaoxiao Qin, Jing Qi, Zhenzhou Wu, Yang Yuan | Published: 2020-02-10 | Updated: 2020-02-11
Poisoning
Adversarial Attack
Cryptographic Techniques

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

Authors: Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter | Published: 2020-02-07 | Updated: 2020-08-11
Poisoning
Robustness Improvement Methods
Continuous Linear Functions

Assessing the Adversarial Robustness of Monte Carlo and Distillation Methods for Deep Bayesian Neural Network Classification

Authors: Meet P. Vadera, Satya Narayan Shukla, Brian Jalaian, Benjamin M. Marlin | Published: 2020-02-07
Bayesian Classification
Poisoning
Adversarial Examples

Can’t Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks

Authors: Moshe Kravchik, Asaf Shabtai | Published: 2020-02-07
Poisoning
Robustness Improvement Methods
Content Specific to Poisoning Attacks

Learning to Detect Malicious Clients for Robust Federated Learning

Authors: Suyi Li, Yong Cheng, Wei Wang, Yang Liu, Tianjian Chen | Published: 2020-02-01
Poisoning
Malicious Node Detection
Federated Learning Systems

Adversarial Attack on Community Detection by Hiding Individuals

Authors: Jia Li, Honglei Zhang, Zhichao Han, Yu Rong, Hong Cheng, Junzhou Huang | Published: 2020-01-22
Community Detection
Poisoning
Adversarial Attack Detection

Advbox: a toolbox to generate adversarial examples that fool neural networks

Authors: Dou Goodman, Hao Xin, Wang Yang, Wu Yuesheng, Xiong Junfeng, Zhang Huan | Published: 2020-01-13 | Updated: 2020-08-26
Poisoning
Adversarial Examples
Adversarial Attack Methods

On the Resilience of Biometric Authentication Systems against Random Inputs

Authors: Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Mohamed Ali Kaafar | Published: 2020-01-13 | Updated: 2020-01-24
Poisoning
Adversarial Attack
Machine Learning