Abstract
In Federated Learning (FL), clients collaboratively train a machine
learning model (called the global model) without sharing their local training data.
The local training data of clients is typically non-i.i.d. and heterogeneous,
resulting in varying contributions from individual clients to the final
performance of the global model. In response, many contribution evaluation
methods have been proposed, allowing the server to evaluate the contribution made by
each client and incentivize high-contributing clients to sustain their
long-term participation in FL. Existing studies mainly focus on developing new
metrics or algorithms to better measure the contribution of each client.
However, the security of contribution evaluation methods in FL operating in
adversarial environments remains largely unexplored. In this paper, we propose the
first model poisoning attack on contribution evaluation methods in FL, termed
ACE. Specifically, we show that a malicious client utilizing ACE can
manipulate the parameters of its local model such that the server evaluates
it as a high contributor, even when its local training data is
of low quality. We perform both theoretical analysis and empirical evaluations
of ACE. Theoretically, we show that ACE effectively boosts the
malicious client's perceived contribution when the server employs the
widely used cosine distance metric to measure contribution. Empirically, our
results show that ACE effectively and efficiently deceives five state-of-the-art
contribution evaluation methods. In addition, ACE preserves the accuracy of the
final global models on testing inputs. We also explore six countermeasures to
defend against ACE. Our results show that they are inadequate to thwart ACE, highlighting
the urgent need for new defenses to safeguard contribution evaluation
methods in FL.
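
To make the cosine-distance setting concrete, the sketch below shows one simple way a server could score contributions by each client's cosine alignment with the aggregated update, and why a direction-only mimicry of the aggregate can inflate a score. This is a minimal illustration under assumed conventions, not the paper's evaluation method or the ACE attack itself; the function names (cosine_distance, score_contributions) and the scoring rule are assumptions for exposition.

```python
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine distance between two flattened model updates: 1 - cos(u, v)."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    if denom == 0.0:
        return 1.0  # treat a zero update as maximally distant
    return 1.0 - float(np.dot(u, v) / denom)

def score_contributions(client_updates: list[np.ndarray]) -> np.ndarray:
    """Score each client by how closely its update aligns with the aggregate.

    A smaller cosine distance to the averaged update yields a higher score.
    Because cosine distance ignores magnitude, a client that steers only the
    *direction* of its update toward the expected aggregate can look highly
    contributing without holding high-quality data -- the kind of weakness
    an attack like ACE exploits.
    """
    aggregate = np.mean(client_updates, axis=0)
    distances = np.array([cosine_distance(u, aggregate) for u in client_updates])
    return 1.0 - distances  # higher = judged more "contributing"

# Toy example (hypothetical data): three honest clients and one client that
# merely mimics the direction it expects the aggregate to take, scaled down.
rng = np.random.default_rng(0)
honest = [rng.normal(size=10) + np.ones(10) for _ in range(3)]
mimic = 0.1 * np.mean(honest, axis=0)  # low-effort but well-aligned update
scores = score_contributions(honest + [mimic])
print(scores)  # the mimicking client typically scores as high as any honest one
```

Under this assumed scoring rule, the mimicking client attains a near-zero cosine distance to the aggregate despite contributing a tiny, derivative update, which is the intuition behind manipulating local model parameters to game direction-based contribution metrics.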