Data Poisoning Attacks Against Federated Learning Systems | Authors: Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu | Published: 2020-07-16 | Updated: 2020-08-11 | Tags: Poisoning, Performance Evaluation, Attack Methods
AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows | Authors: Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie | Published: 2020-07-15 | Updated: 2020-10-23 | Tags: Performance Evaluation, Attack Methods, Generative Model Properties
Robustifying Reinforcement Learning Agents via Action Space Adversarial Training | Authors: Kai Liang Tan, Yasaman Esfandiari, Xian Yeow Lee, Aakanksha, Soumik Sarkar | Published: 2020-07-14 | Tags: Performance Evaluation, Attack Methods, Defense Mechanisms
Security and Machine Learning in the Real World | Authors: Ivan Evtimov, Weidong Cui, Ece Kamar, Emre Kiciman, Tadayoshi Kohno, Jerry Li | Published: 2020-07-13 | Tags: Security Analysis, Attack Methods, Adversarial Examples
A simple defense against adversarial attacks on heatmap explanations | Authors: Laura Rieger, Lars Kai Hansen | Published: 2020-07-13 | Tags: Poisoning, Attack Methods, Defense Mechanisms
Simple and Efficient Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes | Authors: Satya Narayan Shukla, Anit Kumar Sahu, Devin Willmott, J. Zico Kolter | Published: 2020-07-13 | Updated: 2021-06-11 | Tags: Attack Methods, Dimensionality Reduction, Deep Learning
ManiGen: A Manifold Aided Black-box Generator of Adversarial Examples | Authors: Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Abdulelah Algosaibi, Adel Aldalbahi, Mohammed Alaneem, Abdulaziz Alhumam, Mohammed Anan | Published: 2020-07-11 | Tags: Attack Methods, Adversarial Examples, Defense Mechanisms
Generating Adversarial Inputs Using A Black-box Differential Technique | Authors: João Batista Pereira Matos Júnior, Lucas Carvalho Cordeiro, Marcelo d'Amorim, Xiaowei Huang | Published: 2020-07-10 | Tags: Performance Evaluation, Attack Methods, Adversarial Examples
Attack of the Tails: Yes, You Really Can Backdoor Federated Learning | Authors: Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos | Published: 2020-07-09 | Tags: Poisoning, Model Robustness, Attack Methods
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks | Authors: Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P Dickerson, Tom Goldstein | Published: 2020-06-22 | Updated: 2021-06-17 | Tags: Poisoning, Poisoning Attacks, Attack Methods