Abstract
Federated learning (FL) provides an effective machine learning (ML)
architecture to protect data privacy in a distributed manner. However, the
inevitable network asynchrony, the over-dependence on a central coordinator,
and the lack of an open and fair incentive mechanism collectively hinder its
further development. We propose \textsc{IronForge}, a new-generation FL
framework that features a Directed Acyclic Graph (DAG)-based data structure
and eliminates the need for central coordinators to achieve fully decentralized
operations. \textsc{IronForge} runs in a public and open network, and launches
a fair incentive mechanism by enabling state consistency in the DAG, so that
the system suits networks where training resources are unevenly distributed.
In addition, dedicated defense strategies against prevalent FL attacks on
incentive fairness and data privacy are presented to ensure the security of
\textsc{IronForge}. Experimental results based on a newly developed testbed,
FLSim, highlight the superiority of \textsc{IronForge} over existing prevalent
FL frameworks under various specifications in performance, fairness, and
security. To the best of our knowledge, \textsc{IronForge} is the first secure
and fully decentralized FL framework that can be applied in open networks with
realistic network and training settings.