Poisoning

A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

Authors: Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu | Published: 2020-04-22 | Updated: 2020-04-23
Privacy-Preserving Techniques
Poisoning
Attack Type

Headless Horseman: Adversarial Attacks on Transfer Learning Models

Authors: Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu | Published: 2020-04-20
Poisoning
Adversarial Perturbation Methods
Machine Learning

Data Poisoning Attacks on Federated Machine Learning

Authors: Gan Sun, Yang Cong, Jiahua Dong, Qiang Wang, Ji Liu | Published: 2020-04-19
Poisoning
Attack Scenario Analysis
Machine Learning

Poisoning Attacks on Algorithmic Fairness

Authors: David Solans, Battista Biggio, Carlos Castillo | Published: 2020-04-15 | Updated: 2020-06-26
Algorithmic Fairness
Poisoning
Optimization Methods

Weight Poisoning Attacks on Pre-trained Models

Authors: Keita Kurita, Paul Michel, Graham Neubig | Published: 2020-04-14
Backdoor Attack
Poisoning
Adversarial Training

Towards Federated Learning With Byzantine-Robust Client Weighting

Authors: Amit Portnoy, Yoav Tirosh, Danny Hendler | Published: 2020-04-10 | Updated: 2021-05-18
Poisoning
Robustness Enhancement Methods
Optimization Problem

Deep Learning and Open Set Malware Classification: A Survey

Authors: Jingyun Jia | Published: 2020-04-08
Open-Set Recognition
Poisoning
Malware Classification

An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies

Authors: David Enthoven, Zaid Al-Ars | Published: 2020-04-01
Poisoning
Attack Evaluation
Defense Methods

MetaPoison: Practical General-purpose Clean-label Data Poisoning

Authors: W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein | Published: 2020-04-01 | Updated: 2021-02-21
Backdoor Attack
Poisoning
Adversarial Examples

A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks

Authors: Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta | Published: 2020-03-26 | Updated: 2021-12-13
Poisoning
Adversarial Attack Methods
Vulnerability Exploitation Methods