Abstract
Differential privacy (DP) ensures that training a machine learning model does
not leak private data. In practice, we may have access to auxiliary public data
that is free of privacy concerns. In this work, we assume access to a given
amount of public data and settle the following fundamental open questions: 1.
What is the optimal (worst-case) error of a DP model trained over a private
data set while having access to side public data? 2. How can we harness public
data to improve DP model training in practice? We consider these questions in
both the local and central models of pure and approximate DP. To answer the
first question, we prove tight (up to log factors) lower and upper bounds that
characterize the optimal error rates of three fundamental problems: mean
estimation, empirical risk minimization, and stochastic convex optimization. We
show that the optimal error rates can be attained (up to log factors) by either
discarding private data and training a public model, or treating the public data
as if it were private and using an optimal DP algorithm. To address the second
question, we develop novel algorithms that are "even more optimal" (i.e., have
better constants) than the asymptotically optimal approaches described above. For
local DP mean estimation, our algorithm is optimal including constants.
Empirically, our algorithms show benefits over the state-of-the-art.
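To make the second baseline concrete, below is a minimal sketch (Python with
NumPy) of how the two asymptotically optimal baselines could be compared for
central DP mean estimation: use the public data alone, or pool public with
private data and run a standard (eps, delta)-DP Gaussian mechanism, treating
everything as private. The function names, clipping convention, and the error
proxies used to choose between the baselines are illustrative assumptions, not
the paper's algorithm.

    import numpy as np

    def dp_mean_gaussian(X, eps, delta, clip=1.0):
        # (eps, delta)-DP mean via the Gaussian mechanism.
        # Rows are clipped to L2 norm <= clip; replacing one row then
        # changes the mean by at most 2*clip/n in L2 norm.
        n, d = X.shape
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        Xc = X * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        sigma = (2.0 * clip / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        return Xc.mean(axis=0) + np.random.normal(0.0, sigma, size=d)

    def mean_with_public_data(X_priv, X_pub, eps, delta, clip=1.0):
        # Baseline (a): public-only mean, which costs no privacy.
        # Baseline (b): pool public with private data and run the DP
        # mechanism, i.e. treat the public data as if it were private.
        # The error proxies below are heuristic, for illustration only.
        n_pub, d = X_pub.shape
        n = n_pub + X_priv.shape[0]
        err_public_only = clip / np.sqrt(n_pub)  # sampling error alone
        sigma = (2.0 * clip / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        err_pooled = clip / np.sqrt(n) + sigma * np.sqrt(d)  # sampling + noise
        if err_public_only <= err_pooled:
            return X_pub.mean(axis=0)
        return dp_mean_gaussian(np.vstack([X_priv, X_pub]), eps, delta, clip)

In this sketch, whichever baseline has the smaller heuristic error bound is
used; the paper's claim is that one of these two strategies is already optimal
up to log factors, while its new algorithms improve on them in the constants.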