Content Specialized for Poisoning Attacks

De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks

Authors: Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu | Published: 2021-05-08
Poisoning
Content Specialized for Poisoning Attacks
Challenges of Generative Models

Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers

Authors: Tzvika Shapira, David Berend, Ishai Rosenberg, Yang Liu, Asaf Shabtai, Yuval Elovici | Published: 2020-10-30
Backdoor Attack
Malware Detection
Content Specialized for Poisoning Attacks

Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks

Authors: Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, Andreas Spanias | Published: 2020-09-30
Graph Neural Network
Poisoning
Content Specialized for Poisoning Attacks

A black-box adversarial attack for poisoning clustering

Authors: Antonio Emanuele Cinà, Alessandro Torcinovich, Marcello Pelillo | Published: 2020-09-09 | Updated: 2021-11-10
Backdoor Attack
Poisoning
Content Specialized for Poisoning Attacks

Defending Regression Learners Against Poisoning Attacks

Authors: Sandamal Weerasinghe, Sarah M. Erfani, Tansu Alpcan, Christopher Leckie, Justin Kopacz | Published: 2020-08-21
Backdoor Attack
Poisoning
Content Specialized for Poisoning Attacks

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Authors: Xiang Zhang, Marinka Zitnik | Published: 2020-06-15 | Updated: 2020-10-28
Graph Neural Network
Adversarial Attack
Content Specialized for Poisoning Attacks

Dynamic Backdoor Attacks Against Machine Learning Models

Authors: Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang | Published: 2020-03-07 | Updated: 2022-03-03
Poisoning
Content Specialized for Poisoning Attacks
Defense Method

Can’t Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks

Authors: Moshe Kravchik, Asaf Shabtai | Published: 2020-02-07
Poisoning
Robustness Improvement Method
Content Specialized for Poisoning Attacks

Regularization Helps with Mitigating Poisoning Attacks: Distributionally-Robust Machine Learning Using the Wasserstein Distance

Authors: Farhad Farokhi | Published: 2020-01-29
Robustness Improvement Method
Content Specialized for Poisoning Attacks
Continuous Linear Function

A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning

Authors: Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh | Published: 2019-10-30
Convergence Analysis
Attack Method
Content Specialized for Poisoning Attacks