Abstract
Federated learning offers a framework for training a machine learning model in
a distributed fashion while preserving the privacy of the participants. As the
server cannot govern the clients' actions, nefarious clients may attack the
global model by sending malicious local gradients. Meanwhile, there may also
be unreliable clients who are benign but whose training data is partly of low
quality (e.g., blurred or low-resolution images), and who may therefore appear
similar to malicious clients. A defense mechanism thus needs to perform a
three-fold differentiation, which is much more challenging than the conventional
(two-fold) case. This paper introduces MUD-HoG, a novel defense algorithm that
addresses this challenge in federated learning using a long-short history of
gradients, and treats the detected malicious and unreliable clients
differently. Furthermore, MUD-HoG can distinguish between targeted and
untargeted attacks among malicious clients, unlike most prior works, which
consider only one type of attack. Specifically, we take into account
sign-flipping, additive-noise, label-flipping, and multi-label-flipping
attacks, under a non-IID setting. We evaluate MUD-HoG against six
state-of-the-art methods on two datasets. The results show that MUD-HoG
outperforms all of them in terms of accuracy, precision, and recall in the
presence of a mixture of multiple (four) types of attackers as well as
unreliable clients.
Moreover, unlike most prior works, which can tolerate only a small population
of harmful users, MUD-HoG can operate with, and successfully detect, a wide
range of malicious and unreliable clients, up to 47.5% and 10%, respectively, of the
total population. Our code is open-sourced at
https://github.com/LabSAINT/MUD-HoG_Federated_Learning.