Poisoning

Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies

Authors: Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal, Jiliang Tang | Published: 2020-03-02 | Updated: 2020-12-12
Poisoning
Adversarial Examples
Adversarial Learning

Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation

Authors: Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu | Published: 2020-02-28 | Updated: 2020-06-20
Hyperparameter Optimization
Poisoning
Robustness Evaluation

Towards Backdoor Attacks and Defense in Robust Machine Learning Models

Authors: Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay | Published: 2020-02-25 | Updated: 2023-01-11
Backdoor Attack
Poisoning
Robustness Evaluation

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems

Authors: Minghong Fang, Neil Zhenqiang Gong, Jia Liu | Published: 2020-02-19 | Updated: 2020-05-31
Poisoning
Maximum Coverage Problem
Threat Modeling

Deflecting Adversarial Attacks

Authors: Yao Qin, Nicholas Frosst, Colin Raffel, Garrison Cottrell, Geoffrey Hinton | Published: 2020-02-18
Poisoning
Adversarial Attack Detection
Defense Method

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

Authors: Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma | Published: 2020-02-14
Poisoning
Adversarial Attack Detection
Defense Method

CEB Improves Model Robustness

Authors: Ian Fischer, Alexander A. Alemi | Published: 2020-02-13
Poisoning
Model Selection Method
Robustness Evaluation

Adversarial Robustness for Code

Authors: Pavol Bielik, Martin Vechev | Published: 2020-02-11 | Updated: 2020-08-15
Poisoning
Robustness Improvement Method
Adversarial Training

Adversarial Data Encryption

Authors: Yingdong Hu, Liang Zhang, Wei Shan, Xiaoxiao Qin, Jing Qi, Zhenzhou Wu, Yang Yuan | Published: 2020-02-10 | Updated: 2020-02-11
Poisoning
Adversarial Attack
Cryptographic Techniques

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

Authors: Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter | Published: 2020-02-07 | Updated: 2020-08-11
Poisoning
Robustness Improvement Method
Continuous Linear Functions