We investigate the problem of private read update write (PRUW) in federated
submodel learning (FSL) with sparsification. In FSL, a machine learning model
is divided into multiple submodels, where each user updates only the submodel
that is relevant to the user's local data. PRUW is the process of privately
performing FSL by reading from and writing to the required submodel without
revealing the submodel index or the values of the updates to the databases.
Sparsification is a widely used concept in learning, where the users update
only a small fraction of the parameters to reduce the communication cost.
Revealing the coordinates of these selected (sparse) updates, however, leaks
private information about the user.
We show how PRUW in FSL can be performed with sparsification. We propose a
novel scheme that privately reads from and writes to arbitrary parameters of
any given submodel without revealing the submodel index, the values of the
updates, or the coordinates of the sparse updates to the databases. The
proposed scheme
achieves significantly lower reading and writing costs than PRUW without
sparsification.