Abstract
Federated learning (FL) is an efficient approach for large-scale distributed
machine learning that promises data privacy by keeping training data on client
devices. However, recent research has uncovered vulnerabilities in FL,
impacting both security and privacy through poisoning attacks and the potential
disclosure of sensitive information in individual model updates as well as the
aggregated global model. This paper examines why existing FL protection measures
fall short when applied independently, and the challenges of composing them
effectively.
Addressing these issues, we propose WW-FL, an innovative framework that
combines secure multi-party computation (MPC) with hierarchical FL to guarantee
data and global model privacy. One notable feature of WW-FL is its capability
to prevent malicious clients from directly poisoning model parameters,
confining them to less destructive data poisoning attacks. Furthermore, we
provide a PyTorch-based FL implementation integrated with Meta's CrypTen MPC
framework to systematically measure the performance and robustness of WW-FL.
Our extensive evaluation demonstrates that WW-FL is a promising solution for
secure and private large-scale federated learning.