Adversarial Examples

Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions

Authors: Jon Vadillo, Roberto Santana, Jose A. Lozano | Published: 2020-04-14 | Updated: 2023-01-25
Robustness Evaluation
Adversarial Examples
Adversarial Training

Towards Robust Classification with Image Quality Assessment

Authors: Yeli Feng, Yiyu Cai | Published: 2020-04-14
Robustness
Adversarial Examples
Deep Learning

Luring of transferable adversarial perturbations in the black-box paradigm

Authors: Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre | Published: 2020-04-10 | Updated: 2021-03-03
Robustness Improvement Methods
Attack Evaluation
Adversarial Examples

MetaPoison: Practical General-purpose Clean-label Data Poisoning

Authors: W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein | Published: 2020-04-01 | Updated: 2021-02-21
Backdoor Attacks
Poisoning
Adversarial Examples

Adversarial Perturbations Fool Deepfake Detectors

Authors: Apurva Gandhi, Shomik Jain | Published: 2020-03-24 | Updated: 2020-05-15
Adversarial Examples
Adversarial Attack Methods
Defense Methods

One Neuron to Fool Them All

Authors: Anshuman Suri, David Evans | Published: 2020-03-20 | Updated: 2020-06-09
Training Methods
Robustness
Adversarial Examples

RAB: Provable Robustness Against Backdoor Attacks

Authors: Maurice Weber, Xiaojun Xu, Bojan Karlaš, Ce Zhang, Bo Li | Published: 2020-03-19 | Updated: 2023-08-03
Backdoor Attacks
Robustness
Adversarial Examples

Adversarial Transferability in Wearable Sensor Systems

Authors: Ramesh Kumar Sah, Hassan Ghasemzadeh | Published: 2020-03-17 | Updated: 2021-07-15
Adversarial Examples
Adversarial Attack Methods
Non-identical Datasets

Manifold Regularization for Locally Stable Deep Neural Networks

Authors: Charles Jin, Martin Rinard | Published: 2020-03-09 | Updated: 2020-09-22
Training Methods
Robustness
Adversarial Examples

Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world

Authors: Ivan Fursov, Alexey Zaytsev, Nikita Kluchnikov, Andrey Kravchenko, Evgeny Burnaev | Published: 2020-03-09 | Updated: 2020-10-12
Adversarial Examples
Adversarial Attacks
Generative Models