Abstract
In recent years, federated learning (FL) has emerged as a prominent paradigm
for training distributed, large-scale, and privacy-preserving machine learning
(ML) systems. In contrast to standard ML, where data must be gathered at the
location where training is performed, FL exploits the computational
capabilities of millions of edge devices to collaboratively train a shared,
global model without disclosing their local private data.
Specifically, in a typical FL system, the central server acts only as an
orchestrator: it iteratively gathers the local models trained by each client
on its own private data and aggregates them, repeating until convergence.
Although FL offers clear benefits over traditional ML (e.g., it protects
private data ownership by design), it also suffers from notable weaknesses.
One of the most critical challenges is overcoming the centralized
orchestration of the classical FL client-server architecture, which is known
to be vulnerable to single points of failure and man-in-the-middle attacks,
among other threats. To mitigate this exposure, decentralized FL solutions
have emerged in which clients cooperate and communicate directly, without a
central server. This survey
comprehensively summarizes and reviews existing decentralized FL approaches
proposed in the literature. Furthermore, it identifies emerging challenges and
suggests promising research directions in this under-explored domain.
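
For concreteness, the following is a minimal Python sketch of the two
architectures the abstract contrasts, assuming a toy least-squares task, a
single local SGD step per round, FedAvg-style size-weighted averaging on the
server side, and a fixed ring topology with neighbor averaging on the
decentralized side; every name here (make_client, local_update, etc.) is
illustrative rather than drawn from any surveyed system.

# Minimal, illustrative sketch (not any paper's reference implementation)
# contrasting server-orchestrated FL with a serverless, gossip-style variant.
import numpy as np

rng = np.random.default_rng(0)
DIM = 5
TRUE_W = rng.normal(size=DIM)

def make_client(n):
    """A client's private dataset: n noisy linear-regression samples."""
    X = rng.normal(size=(n, DIM))
    y = X @ TRUE_W + 0.01 * rng.normal(size=n)
    return X, y

def local_update(w, data, lr=0.1):
    """One local SGD step on the client loss 0.5/n * ||X w - y||^2."""
    X, y = data
    return w - lr * (X.T @ (X @ w - y)) / len(y)

clients = [make_client(n) for n in (20, 50, 30)]

# Centralized FL: the server gathers local models and aggregates them.
w_global = np.zeros(DIM)
for _ in range(200):
    local_models = [local_update(w_global, d) for d in clients]
    sizes = [len(d[1]) for d in clients]
    # FedAvg-style aggregation: average weighted by local dataset size.
    w_global = np.average(local_models, axis=0, weights=sizes)

# Decentralized FL: no server; each peer averages with its ring neighbors.
peers = [np.zeros(DIM) for _ in clients]
for _ in range(200):
    trained = [local_update(w, d) for w, d in zip(peers, clients)]
    peers = [np.mean([trained[(i - 1) % len(trained)],
                      trained[i],
                      trained[(i + 1) % len(trained)]], axis=0)
             for i in range(len(trained))]

print("centralized error:  ", np.linalg.norm(w_global - TRUE_W))
print("decentralized error:", np.linalg.norm(peers[0] - TRUE_W))

In the decentralized loop, the plain neighbor average stands in for the
doubly stochastic mixing step used by gossip-based schemes; the essential
difference from the centralized loop is that no single node ever gathers or
aggregates all of the models.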