Abstract
Federated learning (FL) enables multiple participants to train a global
machine learning model without sharing their private training data.
Peer-to-peer (P2P) FL advances existing centralized FL paradigms by eliminating
the server that aggregates local models from participants and then updates the
global model. However, P2P FL is vulnerable to (i) honest-but-curious
participants whose objective is to infer private training data of other
participants, and (ii) Byzantine participants who can transmit arbitrarily
manipulated local models to corrupt the learning process. P2P FL schemes that
simultaneously guarantee Byzantine resilience and preserve privacy have
received little study. In this paper, we develop Brave, a protocol that ensures
Byzantine Resilience And a priVacy-prEserving guarantee for P2P FL in the presence
of both types of adversaries. We show that Brave preserves privacy by
establishing that any honest-but-curious adversary cannot infer other
participants' private data by observing their models. We further prove that
Brave is Byzantine-resilient, which guarantees that all benign participants
converge to an identical model that deviates from a global model trained
without Byzantine adversaries by a bounded distance. We evaluate Brave against
three state-of-the-art adversaries in a P2P FL setting for image classification
tasks on the benchmark datasets CIFAR-10 and MNIST. Our results show that the global model
learned with Brave in the presence of adversaries achieves comparable
classification accuracy to a global model trained in the absence of any
adversary.