Input Hessian Regularization of Neural Networks | Authors: Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft | Published: 2020-09-14 | Tags: Poisoning, Robust Regression, Adversarial Training
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses | Authors: Ambar Pal, René Vidal | Published: 2020-09-14 | Updated: 2020-11-11 | Tags: Game Theory, Poisoning, Adversarial Training
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent | Authors: Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen | Published: 2020-09-10 | Updated: 2023-04-20 | Tags: Poisoning, Performance Evaluation, Adversarial Attack Methods
A black-box adversarial attack for poisoning clustering | Authors: Antonio Emanuele Cinà, Alessandro Torcinovich, Marcello Pelillo | Published: 2020-09-09 | Updated: 2021-11-10 | Tags: Backdoor Attack, Poisoning, Content Specialized for Toxicity Attacks
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning | Authors: Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro | Published: 2020-09-08 | Updated: 2022-05-27 | Tags: Backdoor Attack, Poisoning, Membership Disclosure Risk
Detection Defense Against Adversarial Attacks with Saliency Map | Authors: Dengpan Ye, Chuanxi Chen, Changrui Liu, Hao Wang, Shunzhi Jiang | Published: 2020-09-06 | Tags: Poisoning, Adversarial Example, Adversarial Attack Methods
Improving Resistance to Adversarial Deformations by Regularizing Gradients | Authors: Pengfei Xia, Bin Li | Published: 2020-08-29 | Updated: 2020-10-06 | Tags: Poisoning, Adversarial Example, Adversarial Attack
Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning | Authors: Yinghua Zhang, Yangqiu Song, Jian Liang, Kun Bai, Qiang Yang | Published: 2020-08-25 | Tags: Poisoning, Adversarial Learning, Deep Learning
Defending Distributed Classifiers Against Data Poisoning Attacks | Authors: Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie | Published: 2020-08-21 | Tags: Poisoning, Attack Method, Adversarial Learning
Defending Regression Learners Against Poisoning Attacks | Authors: Sandamal Weerasinghe, Sarah M. Erfani, Tansu Alpcan, Christopher Leckie, Justin Kopacz | Published: 2020-08-21 | Tags: Backdoor Attack, Poisoning, Content Specialized for Toxicity Attacks