Poisoning

Benchmarking Adversarial Robustness

Authors: Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu | Published: 2019-12-26
Poisoning
Adversarial Examples
Analysis of Defense Method Effectiveness

secml: A Python Library for Secure and Explainable Machine Learning

Authors: Maura Pintor, Luca Demetrio, Angelo Sotgiu, Marco Melis, Ambra Demontis, Battista Biggio | Published: 2019-12-20 | Updated: 2022-05-13
Poisoning
Adversarial Training
Watermark Evaluation
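
Since the secml entry names a runnable Python library, a minimal train-and-evaluate sketch is shown below for orientation. It assumes the class names that appear in the library's public tutorials (CDLRandomBlobs, CTrainTestSplit, CClassifierSVM, CMetricAccuracy); exact signatures vary between versions, so treat this as illustrative rather than as the library's documented API.

    # Minimal sketch (assumed secml API; signatures may differ across versions):
    # train an SVM on synthetic blobs and measure clean accuracy, the usual
    # starting point before running the library's evasion/poisoning attacks.
    from secml.data.loader import CDLRandomBlobs
    from secml.data.splitter import CTrainTestSplit
    from secml.ml.classifiers import CClassifierSVM
    from secml.ml.peval.metrics import CMetricAccuracy

    # Two-class synthetic dataset.
    dataset = CDLRandomBlobs(n_features=2, centers=[[-1, -1], [1, 1]],
                             cluster_std=0.8, n_samples=1000,
                             random_state=0).load()
    tr, ts = CTrainTestSplit(train_size=0.8, random_state=0).split(dataset)

    clf = CClassifierSVM()   # linear SVM by default
    clf.fit(tr.X, tr.Y)      # assumed fit(X, Y) signature (older versions take a dataset)

    y_pred = clf.predict(ts.X)
    acc = CMetricAccuracy().performance_score(y_true=ts.Y, y_pred=y_pred)
    print(f"Clean accuracy: {acc:.3f}")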

Advances and Open Problems in Federated Learning

Authors: Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao | Published: 2019-12-10 | Updated: 2021-03-09
Secure Aggregation
Privacy Protection
Poisoning

Label-Consistent Backdoor Attacks

Authors: Alexander Turner, Dimitris Tsipras, Aleksander Madry | Published: 2019-12-05 | Updated: 2019-12-06
Backdoor Attack
Poisoning
Adversarial Examples

A Survey of Black-Box Adversarial Attacks on Computer Vision Models

Authors: Siddhant Bhambri, Sumanyu Muku, Avinash Tulasi, Arun Balaji Buduru | Published: 2019-12-03 | Updated: 2020-02-07
Poisoning
Adversarial Example Vulnerability
Analysis of Defense Method Effectiveness

Data Poisoning Attacks on Neighborhood-based Recommender Systems

Authors: Liang Chen, Yangjun Xu, Fenfang Xie, Min Huang, Zibin Zheng | Published: 2019-12-01
Poisoning
Robustness
Optimization Problem

An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense

Authors: Chao Tang, Yifei Fan, Anthony Yezzi | Published: 2019-11-26
Poisoning
Adversarial Examples
Research Methodology

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

Authors: Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong | Published: 2019-11-26 | Updated: 2021-11-21
Poisoning
Model Performance Evaluation
Attack Type

Adversarial Learning of Privacy-Preserving and Task-Oriented Representations

Authors: Taihong Xiao, Yi-Hsuan Tsai, Kihyuk Sohn, Manmohan Chandraker, Ming-Hsuan Yang | Published: 2019-11-22
Privacy-Preserving Data Mining
Poisoning
Membership Inference

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic

Authors: Zhen Xiang, David J. Miller, Hang Wang, George Kesidis | Published: 2019-11-18 | Updated: 2020-04-06
DDIA Detection and Localization
Backdoor Attack
Poisoning