Abstract
Federated learning (FL) enables multiple participants to collaboratively
train machine learning models without sharing their raw data, keeping that
data private and secure. Blockchain technology further enhances FL by providing stronger
security, a transparent audit trail, and protection against data tampering and
model manipulation. Most blockchain-secured FL systems rely on conventional
consensus mechanisms: Proof-of-Work (PoW) is computationally expensive, while
Proof-of-Stake (PoS) improves energy efficiency but risks centralization as it
inherently favors participants with larger stakes. Recently, learning-based
consensus has emerged as an alternative by replacing cryptographic tasks with
model training to save energy. However, this approach introduces potential
privacy vulnerabilities, as the training process may inadvertently expose
sensitive information through gradient sharing and model updates. To address
these challenges, we propose a novel Zero-Knowledge Proof of Training (ZKPoT)
consensus mechanism. This method leverages the zero-knowledge succinct
non-interactive argument of knowledge proof (zk-SNARK) protocol to validate
participants' contributions based on their model performance, effectively
eliminating the inefficiencies of traditional consensus methods and mitigating
the privacy risks posed by learning-based consensus. We analyze our system's
security, demonstrating its capacity to prevent the disclosure of sensitive
information about local models or training data to untrusted parties during the
entire FL process. Extensive experiments demonstrate that our system is robust
against privacy and Byzantine attacks, maintains accuracy and utility without
trade-offs, scales across various blockchain settings, and is efficient in both
computation and communication.
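To make the consensus idea concrete, the sketch below mocks the ZKPoT flow described above: a participant commits to its local model and claims a test-set accuracy, and validators accept the contribution only if the proof verifies and the claim clears a threshold, without ever seeing the model or training data. This is a toy illustration, not the paper's protocol: the hash-based "proof", the `ACCURACY_THRESHOLD` constant, and all function names are assumptions, and a real system would replace the mock proof with an actual zk-SNARK (e.g. generated by a circuit over the model evaluation).

```python
import hashlib
import json
import secrets

# Assumed consensus threshold for illustration only (not from the paper).
ACCURACY_THRESHOLD = 0.80

def commitment(model_params: dict, nonce: bytes) -> str:
    """Hash commitment to the local model. Stands in for the zk-SNARK
    witness commitment; the model parameters are never published."""
    payload = json.dumps(model_params, sort_keys=True).encode() + nonce
    return hashlib.sha256(payload).hexdigest()

def prove_training(model_params: dict, accuracy: float):
    """Participant side: emit a (claim, proof) pair.
    A real zk-SNARK would prove 'the committed model achieves
    accuracy >= threshold on the agreed test set' in zero knowledge;
    here the proof is mocked as a hash binding the claim together."""
    nonce = secrets.token_bytes(16)
    claim = {"accuracy": accuracy,
             "commitment": commitment(model_params, nonce)}
    proof = hashlib.sha256(json.dumps(claim, sort_keys=True).encode()).hexdigest()
    return claim, proof

def verify_contribution(claim: dict, proof: str) -> bool:
    """Validator side: accept only if the proof checks out and the
    claimed accuracy clears the threshold. Neither the model nor the
    training data is revealed to the validator."""
    expected = hashlib.sha256(json.dumps(claim, sort_keys=True).encode()).hexdigest()
    return proof == expected and claim["accuracy"] >= ACCURACY_THRESHOLD

# Toy run: an honest participant's contribution verifies...
params = {"w": [0.1, -0.3], "b": 0.05}
claim, proof = prove_training(params, accuracy=0.91)
print(verify_contribution(claim, proof))      # True
# ...while a tampered accuracy claim is rejected, since it no longer
# matches the proof.
claim_bad = dict(claim, accuracy=0.99)
print(verify_contribution(claim_bad, proof))  # False
```

The design point the sketch preserves is that validation depends only on the claim and the proof, so consensus participants can rank contributions by verified model performance rather than by hash power or stake.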