Poisoning

Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks

Authors: Hetvi Waghela, Jaydip Sen, Sneha Rakshit | Published: 2024-08-20
Poisoning
Adversarial Example
Defense Method

Transferring Backdoors between Large Language Models by Knowledge Distillation

Authors: Pengzhou Cheng, Zongru Wu, Tianjie Ju, Wei Du, Zhuosheng Zhang, Gongshen Liu | Published: 2024-08-19
LLM Security
Backdoor Attack
Poisoning

Regularization for Adversarial Robust Learning

Authors: Jie Wang, Rui Gao, Yao Xie | Published: 2024-08-19 | Updated: 2024-08-22
Algorithm
Poisoning
Regularization

Random Gradient Masking as a Defensive Measure to Deep Leakage in Federated Learning

Authors: Joon Kim, Sejin Park | Published: 2024-08-15
Watermarking
Poisoning
Defense Method

FedMADE: Robust Federated Learning for Intrusion Detection in IoT Networks Using a Dynamic Aggregation Method

Authors: Shihua Sun, Pragya Sharma, Kenechukwu Nwodo, Angelos Stavrou, Haining Wang | Published: 2024-08-13
Client Clustering
Poisoning
Optimization Problem

Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense

Authors: Qilei Li, Ahmed M. Abdelmoniem | Published: 2024-08-05 | Updated: 2024-08-16
DoS Mitigation
Poisoning
Defense Method

Model Hijacking Attack in Federated Learning

Authors: Zheng Li, Siyuan Wu, Ruichuan Chen, Paarijaat Aditya, Istemi Ekin Akkus, Manohar Vanga, Min Zhang, Hao Li, Yang Zhang | Published: 2024-08-04
Watermarking
Class Mapping Method
Poisoning

Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks

Authors: Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann | Published: 2024-07-15 | Updated: 2024-10-14
Backdoor Attack
Poisoning
Optimization Problem

A Geometric Framework for Adversarial Vulnerability in Machine Learning

Authors: Brian Bell | Published: 2024-07-03
Poisoning
Adversarial Example
Literature List

Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models

Authors: Rui Ye, Jingyi Chai, Xiangrui Liu, Yaodong Yang, Yanfeng Wang, Siheng Chen | Published: 2024-06-15
LLM Security
Prompt Injection
Poisoning