We investigate the rate-distortion tradeoff in private read update write
(PRUW) in the context of federated submodel learning (FSL). In FSL, a machine
learning (ML) model is divided into multiple submodels based on different types
of data used for training. Each user downloads and updates only the submodel
relevant to its local data. The process of downloading and updating the
required submodel while guaranteeing the privacy of the submodel index and the
values of the updates is known as PRUW. In this work, we study how the
communication cost of PRUW can be reduced when a predetermined amount of
distortion is allowed in the reading (download) and writing (upload) phases. We
characterize the rate-distortion tradeoff in PRUW and present a scheme that
achieves the lowest communication cost under a given distortion budget.
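For context, the distortion budget here plays the role of $D$ in the classical Shannon rate-distortion function, stated below as a standard background definition (the symbols are illustrative and not the notation of this work): the minimum rate needed to describe a source $X$ within expected distortion $D$ is
\[
R(D) = \min_{p(\hat{x} \mid x)\,:\, \mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X}).
\]
The tradeoff characterized in this work is the analogous quantity for PRUW, where the total communication cost comprises both download and upload, each subject to its own distortion budget in the reading and writing phases.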