Abstract
Federated Learning (FL) is a distributed machine learning paradigm that
enables multiple clients to collaboratively train a global model without
sharing their private local data. However, FL systems are vulnerable to
attacks in which malicious clients perform data poisoning or model
poisoning, which can degrade the performance of the aggregated global model.
Existing defense methods typically focus on mitigating specific types of
poisoning and are often ineffective against unseen attack types. They also
assume that attacks occur only at moderate rates, an assumption that does not
always hold in practice. Consequently, these methods can suffer substantial
losses in accuracy and robustness when detecting and handling updates from
malicious clients. To overcome these challenges, we propose a simple yet
effective framework for detecting malicious clients, namely
Confidence-Aware Defense (CAD), which uses the confidence scores of local
models as criteria to evaluate the reliability of local updates. Our key
insight is that malicious attacks, regardless of attack type, will cause the
model to deviate from its previous state, thus leading to increased uncertainty
when making predictions. Therefore, CAD is effective against both
model poisoning and data poisoning attacks, accurately identifying and
mitigating potentially malicious updates even under varying attack
intensities and degrees of data heterogeneity. Experimental results
demonstrate that our method significantly enhances the robustness of FL
systems against diverse attack types and scenarios, achieving higher model
accuracy and stability.
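
The abstract describes CAD only at a high level: local updates are screened by the confidence of the corresponding local models. As a rough illustration of that idea, the Python sketch below scores each client model by its mean maximum softmax probability on a probe batch and averages only the updates whose confidence clears a median cutoff. The confidence proxy, the threshold rule, and all function names (max_softmax_confidence, cad_aggregate) are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch of confidence-aware update filtering; the confidence
# proxy and the median threshold are illustrative assumptions, not the
# paper's actual CAD algorithm.
import numpy as np

def max_softmax_confidence(logits: np.ndarray) -> float:
    """Mean maximum softmax probability over a probe batch, one common
    proxy for a model's predictive confidence (an assumption here)."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(probs.max(axis=1).mean())

def cad_aggregate(updates, confidences):
    """Keep only updates whose confidence reaches the median confidence,
    then average the survivors into the next global update."""
    confidences = np.asarray(confidences)
    keep = confidences >= np.median(confidences)
    kept = [u for u, k in zip(updates, keep) if k]
    return np.mean(kept, axis=0)

# Toy round: four honest clients produce peaked (confident) logits on a
# probe batch; one poisoned client produces near-uniform (uncertain) logits.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.1, size=8) for _ in range(4)]
updates.append(rng.normal(5.0, 2.0, size=8))          # outlier update
confs = [max_softmax_confidence(rng.normal(0, 4.0, (32, 10))) for _ in range(4)]
confs.append(max_softmax_confidence(rng.normal(0, 0.2, (32, 10))))
print([round(c, 2) for c in confs])    # attacker's confidence is far lower
print(cad_aggregate(updates, confs).round(3))
```

The median cutoff is used here only because it requires no tuned threshold; the paper may calibrate the decision rule, and the confidence measure itself, quite differently.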