Transferring Backdoors between Large Language Models by Knowledge Distillation | Authors: Pengzhou Cheng, Zongru Wu, Tianjie Ju, Wei Du, Zhuosheng Zhang, Gongshen Liu | Published: 2024-08-19 | Tags: LLM Security, Backdoor Attack, Poisoning
Regularization for Adversarial Robust Learning | Authors: Jie Wang, Rui Gao, Yao Xie | Published: 2024-08-19 | Updated: 2024-08-22 | Tags: Algorithm, Poisoning, Regularization
Random Gradient Masking as a Defensive Measure to Deep Leakage in Federated Learning | Authors: Joon Kim, Sejin Park | Published: 2024-08-15 | Tags: Watermarking, Poisoning, Defense Method
FedMADE: Robust Federated Learning for Intrusion Detection in IoT Networks Using a Dynamic Aggregation Method | Authors: Shihua Sun, Pragya Sharma, Kenechukwu Nwodo, Angelos Stavrou, Haining Wang | Published: 2024-08-13 | Tags: Client Clustering, Poisoning, Optimization Problem
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | Authors: Qilei Li, Ahmed M. Abdelmoniem | Published: 2024-08-05 | Updated: 2024-08-16 | Tags: DoS Mitigation, Poisoning, Defense Method
Model Hijacking Attack in Federated Learning | Authors: Zheng Li, Siyuan Wu, Ruichuan Chen, Paarijaat Aditya, Istemi Ekin Akkus, Manohar Vanga, Min Zhang, Hao Li, Yang Zhang | Published: 2024-08-04 | Tags: Watermarking, Class Mapping Method, Poisoning
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks | Authors: Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann | Published: 2024-07-15 | Updated: 2024-10-14 | Tags: Backdoor Attack, Poisoning, Optimization Problem
A Geometric Framework for Adversarial Vulnerability in Machine Learning | Authors: Brian Bell | Published: 2024-07-03 | Tags: Poisoning, Adversarial Example, Literature List
Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models | Authors: Rui Ye, Jingyi Chai, Xiangrui Liu, Yaodong Yang, Yanfeng Wang, Siheng Chen | Published: 2024-06-15 | Tags: LLM Security, Prompt Injection, Poisoning
RMF: A Risk Measurement Framework for Machine Learning Models | Authors: Jan Schröder, Jakub Breier | Published: 2024-06-15 | Tags: Backdoor Attack, Poisoning, Risk Management