Poisoning

How Does Mixup Help With Robustness and Generalization?

Authors: Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou | Published: 2020-10-09 | Updated: 2021-03-17
Poisoning
Robustness Evaluation
Generalization Performance

Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples

Authors: Sven Gowal, Chongli Qin, Jonathan Uesato, Timothy Mann, Pushmeet Kohli | Published: 2020-10-07 | Updated: 2021-03-30
Poisoning
Robustness Improvement Methods
Adversarial Attack

Understanding Catastrophic Overfitting in Single-step Adversarial Training

Authors: Hoki Kim, Woojin Lee, Jaewook Lee | Published: 2020-10-05 | Updated: 2020-12-15
Poisoning
Evaluation of Robustness
Adversarial Training

Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks

Authors: Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, Andreas Spanias | Published: 2020-09-30
GNN
Poisoning
Content Specific to Poisoning Attacks

Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients

Authors: Yifei Huang, Yaodong Yu, Hongyang Zhang, Yi Ma, Yuan Yao | Published: 2020-09-28 | Updated: 2021-06-02
Poisoning
Relationship Between Robustness and Privacy
Deep Learning

A Robust graph attention network with dynamic adjusted Graph

Authors: Xianchen Zhou, Yaoyun Zeng, Hongxia Wang | Published: 2020-09-28 | Updated: 2022-08-04
Graph Transformation
Poisoning
Relationship Between Robustness and Privacy

Semantics-Preserving Adversarial Training

Authors: Wonseok Lee, Hanbit Lee, Sang-goo Lee | Published: 2020-09-23
Poisoning
Robustness
Generative Models

Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version)

Authors: Gautam Raj Mode, Khaza Anuarul Hoque | Published: 2020-09-21 | Updated: 2020-09-28
Poisoning
Adversarial Training
Vulnerability Management

Adversarial Concept Drift Detection under Poisoning Attacks for Robust Data Stream Mining

Authors: Łukasz Korycki, Bartosz Krawczyk | Published: 2020-09-20
Drift Detection Methods
Poisoning
Adversarial Attack Detection

Data Poisoning Attacks on Regression Learning and Corresponding Defenses

Authors: Nicolas Michael Müller, Daniel Kowatsch, Konstantin Böttinger | Published: 2020-09-15
Backdoor Attack
Poisoning
Robust Regression