Poisoning

Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version)

Authors: Gautam Raj Mode, Khaza Anuarul Hoque | Published: 2020-09-21 | Updated: 2020-09-28
Poisoning
Adversarial Training
Vulnerability Management

Adversarial Concept Drift Detection under Poisoning Attacks for Robust Data Stream Mining

Authors: Łukasz Korycki, Bartosz Krawczyk | Published: 2020-09-20
Drift Detection Methods
Poisoning
Adversarial Attack Detection

Data Poisoning Attacks on Regression Learning and Corresponding Defenses

Authors: Nicolas Michael Müller, Daniel Kowatsch, Konstantin Böttinger | Published: 2020-09-15
Backdoor Attack
Poisoning
Robust Regression

Input Hessian Regularization of Neural Networks

Authors: Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft | Published: 2020-09-14
Poisoning
Robust Regression
Adversarial Training

A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses

Authors: Ambar Pal, René Vidal | Published: 2020-09-14 | Updated: 2020-11-11
Game Theory
Poisoning
Adversarial Training

Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent

Authors: Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen | Published: 2020-09-10 | Updated: 2023-04-20
Poisoning
Performance Evaluation
Adversarial Attack Methods

A black-box adversarial attack for poisoning clustering

Authors: Antonio Emanuele Cinà, Alessandro Torcinovich, Marcello Pelillo | Published: 2020-09-09 | Updated: 2021-11-10
Backdoor Attack
Poisoning
Content Focused on Poisoning Attacks

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning

Authors: Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro | Published: 2020-09-08 | Updated: 2022-05-27
Backdoor Attack
Poisoning
Membership Disclosure Risk

Detection Defense Against Adversarial Attacks with Saliency Map

Authors: Dengpan Ye, Chuanxi Chen, Changrui Liu, Hao Wang, Shunzhi Jiang | Published: 2020-09-06
Poisoning
Adversarial Examples
Adversarial Attack Methods

Improving Resistance to Adversarial Deformations by Regularizing Gradients

Authors: Pengfei Xia, Bin Li | Published: 2020-08-29 | Updated: 2020-10-06
Poisoning
Adversarial Examples
Adversarial Attack