Abstract
Federated learning (FL) is a machine learning paradigm that enables multiple
decentralized clients to collaboratively train a model under the orchestration
of a central aggregator. FL can be a scalable machine learning solution in big
data scenarios. Traditional FL rests on the trust assumption that the central
aggregator forms client cohorts honestly. In reality, however, a malicious
aggregator could discard and replace the clients' trained models, or insert
fake clients, to manipulate the final training results.
In this work, we introduce zkFL,
which leverages zero-knowledge proofs to tackle the issue of a malicious
aggregator during the model aggregation process. To guarantee correct
aggregation results, the aggregator provides a proof per round, demonstrating
to the clients that it has executed the intended aggregation faithfully. To
further reduce the clients' verification cost, we use a blockchain to handle
the proof in a zero-knowledge way, where miners (i.e., the participants who
validate and maintain the blockchain data) can verify the proof without
learning the clients' local or aggregated models. Theoretical analysis and
empirical results show that zkFL achieves better security and privacy than
traditional FL, without modifying the underlying FL network structure or
heavily compromising training speed.
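The per-round structure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's protocol: a SHA-256 hash commitment stands in for a real zero-knowledge proof, all function names are hypothetical, and unlike an actual ZK verifier this toy checker sees the models themselves.

```python
import hashlib

def commit(values):
    """Toy hash commitment over a list of model vectors (stand-in for a ZK proof)."""
    h = hashlib.sha256()
    for v in values:
        h.update(repr(v).encode())
    return h.hexdigest()

def aggregate(updates):
    """Honest aggregation: element-wise average of the clients' local updates."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

def aggregator_round(updates):
    """Aggregator publishes the global model plus a 'proof' binding it to the inputs."""
    global_model = aggregate(updates)
    proof = commit(updates + [global_model])
    return global_model, proof

def verify(updates, global_model, proof):
    """Miner-side check: recompute the expected aggregate and its commitment.
    A real ZK verifier would accept or reject without access to the models."""
    expected = aggregate(updates)
    return global_model == expected and proof == commit(updates + [expected])
```

For example, with two clients submitting updates `[1.0, 2.0]` and `[3.0, 4.0]`, an honest round yields the average and a matching proof, while a substituted global model fails verification.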