Poisoning

Improved Adversarial Training via Learned Optimizer

Authors: Yuanhao Xiong, Cho-Jui Hsieh | Published: 2020-04-25
Poisoning
Optimization Problem
Adaptive Adversarial Training

A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers

Authors: Arka Ghosh, Sankha Subhra Mullick, Shounak Datta, Swagatam Das, Rammohan Mallipeddi, Asit Kr. Das | Published: 2020-04-24 | Updated: 2021-09-09
Poisoning
Adversarial Attack Methods
Optimization Problem

Adversarial Attacks and Defenses: An Interpretation Perspective

Authors: Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu | Published: 2020-04-23 | Updated: 2020-10-07
Poisoning
Adversarial Examples
Adversarial Attack Methods

How to compare adversarial robustness of classifiers from a global perspective

Authors: Niklas Risse, Christina Göpfert, Jan Philip Göpfert | Published: 2020-04-22 | Updated: 2020-10-15
Poisoning
Robustness Analysis
Evaluation Methods

A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

Authors: Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu | Published: 2020-04-22 | Updated: 2020-04-23
Privacy-Preserving Techniques
Poisoning
Attack Types

Headless Horseman: Adversarial Attacks on Transfer Learning Models

Authors: Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu | Published: 2020-04-20
Poisoning
Adversarial Perturbation Methods
Machine Learning

Data Poisoning Attacks on Federated Machine Learning

Authors: Gan Sun, Yang Cong, Jiahua Dong, Qiang Wang, Ji Liu | Published: 2020-04-19
Poisoning
Attack Scenario Analysis
Machine Learning

Poisoning Attacks on Algorithmic Fairness

Authors: David Solans, Battista Biggio, Carlos Castillo | Published: 2020-04-15 | Updated: 2020-06-26
Algorithmic Fairness
Poisoning
Optimization Methods

Weight Poisoning Attacks on Pre-trained Models

Authors: Keita Kurita, Paul Michel, Graham Neubig | Published: 2020-04-14
Backdoor Attacks
Poisoning
Adversarial Training

Towards Federated Learning With Byzantine-Robust Client Weighting

Authors: Amit Portnoy, Yoav Tirosh, Danny Hendler | Published: 2020-04-10 | Updated: 2021-05-18
Poisoning
Robustness Improvement Methods
Optimization Problem