Private Read Update Write (PRUW) in Federated Submodel Learning (FSL): Communication Efficient Schemes With and Without Sparsification
Abstract
We investigate the problem of private read update write (PRUW) in relation to
private federated submodel learning (FSL), where a machine learning model is
divided into multiple submodels based on the different types of data used to
train the model. In PRUW, each user downloads the required submodel without
revealing its index in the reading phase, and uploads the updates of the
submodel without revealing the submodel index or the values of the updates in
the writing phase. In this work, we first provide a basic communication-efficient PRUW scheme, and then study further means of reducing the communication cost via sparsification. Gradient sparsification is a widely used technique in learning applications, in which only a selected subset of parameters is downloaded and updated, significantly reducing the communication cost. In this paper,
we study how sparsification can be incorporated into private FSL with the goal of reducing the communication cost, while guaranteeing information-theoretic privacy of the updated submodel index as well as the values of the updates. To this end, we introduce two schemes: PRUW with top $r$ sparsification and PRUW with random sparsification. The former communicates only the most significant parameters/updates between the users and the servers, while the latter communicates a randomly selected set of parameters/updates.
The two proposed schemes introduce novel techniques, such as parameter/update (noisy) permutations, to handle the additional sources of information leakage in PRUW caused by sparsification (see the illustrative sketches below). Both schemes achieve significantly lower communication costs than the basic (non-sparse) PRUW scheme.
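To make the sparsification idea in the abstract concrete, the following is a minimal, hypothetical sketch of generic top $r$ and random gradient sparsification. It is not the paper's private PRUW protocol; the function names (`sparsify_top_r`, `sparsify_random`) and the fraction `r` are illustrative assumptions.

```python
# Illustrative sketch only: generic top-r and random gradient
# sparsification, NOT the paper's private PRUW scheme.
import numpy as np

def sparsify_top_r(update: np.ndarray, r: float):
    """Keep only the fraction r of entries with the largest magnitudes."""
    k = max(1, int(r * update.size))
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # positions of top-k magnitudes
    return idx, flat[idx]                         # (positions, values) to upload

def sparsify_random(update: np.ndarray, r: float, rng: np.random.Generator):
    """Keep a uniformly random fraction r of entries, independent of magnitude."""
    k = max(1, int(r * update.size))
    idx = rng.choice(update.size, size=k, replace=False)
    return idx, update.ravel()[idx]

# Example: a user sparsifies a submodel update before the writing phase.
rng = np.random.default_rng(0)
update = rng.normal(size=1000)
idx_top, vals_top = sparsify_top_r(update, r=0.1)        # 100 largest-magnitude updates
idx_rnd, vals_rnd = sparsify_random(update, r=0.1, rng=rng)
```

Intuitively, the positions of the top $r$ entries depend on the update values, so revealing them can leak information about the updates; random selection avoids this dependence at the cost of discarding significant parameters. This is the additional leakage that the permutation techniques mentioned in the abstract are designed to handle.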
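The abstract also mentions parameter/update (noisy) permutations as the tool for hiding this leakage. The sketch below shows only the basic masking idea, applying a secret random permutation to parameter positions so that the databases do not observe the true selected coordinates; it is a simplification under that assumption, not the paper's information-theoretic construction (in particular, the "noisy" component is omitted), and all names are hypothetical.

```python
# Illustrative sketch only: masking which coordinates were selected by
# applying a secret random permutation to parameter positions before a
# sparse upload. This is NOT the paper's (noisy) permutation construction.
import numpy as np

rng = np.random.default_rng(1)
num_params = 1000
perm = rng.permutation(num_params)   # secret permutation, hidden from the databases
inv_perm = np.argsort(perm)          # inverse permutation, used to undo the shuffle

def permute_positions(idx: np.ndarray) -> np.ndarray:
    """Map true parameter positions to the permuted positions the databases see."""
    return perm[idx]

def unpermute_positions(perm_idx: np.ndarray) -> np.ndarray:
    """Recover true positions from permuted ones (held by the permutation owner)."""
    return inv_perm[perm_idx]

# The databases observe only permuted positions, so the identity of the
# selected (e.g., top-r) coordinates is hidden from them.
true_idx = np.array([3, 42, 7])
seen_by_db = permute_positions(true_idx)
recovered = unpermute_positions(seen_by_db)
assert np.array_equal(recovered, true_idx)
```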