Abstract
When training a machine learning model with differential privacy, one sets a
privacy budget. This budget represents a maximal privacy violation that any
user is willing to face by contributing their data to the training set. We
argue that this approach is limited because different users may have different
privacy expectations. Thus, setting a uniform privacy budget across all points
may be overly conservative for some users or, conversely, not sufficiently
protective for others. In this paper, we capture these preferences through
individualized privacy budgets. To demonstrate their practicality, we introduce
a variant of Differentially Private Stochastic Gradient Descent (DP-SGD) that
supports such individualized budgets. DP-SGD is the canonical approach to
training models with differential privacy. We modify its data sampling and
gradient noising mechanisms to arrive at our approach, which we call
Individualized DP-SGD (IDP-SGD). Because IDP-SGD provides privacy guarantees
tailored to the preferences of individual users and their data points, we find
it empirically improves privacy-utility trade-offs.
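To make the mechanism described above concrete, the following is a minimal, hypothetical sketch of one training step in the spirit of IDP-SGD, not the paper's actual implementation. It assumes each data point carries its own Poisson sampling rate, precomputed from that point's privacy budget (the calibration itself is not shown), while per-example clipping and Gaussian noising follow standard DP-SGD. The function name `idp_sgd_step` and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def idp_sgd_step(per_example_grads, sample_rates, clip_norm=1.0, sigma=1.0):
    """One illustrative IDP-SGD-style step (hypothetical sketch).

    per_example_grads: (n, d) array of per-example gradients.
    sample_rates: (n,) individualized Poisson sampling probabilities,
        assumed to be derived offline from each point's privacy budget.
    """
    n, d = per_example_grads.shape

    # Individualized Poisson sampling: points with larger budgets get
    # larger sampling rates and so participate in more steps.
    included = rng.random(n) < sample_rates

    # Standard DP-SGD ingredient: clip each example's gradient to bound
    # its sensitivity.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Sum the sampled, clipped gradients and add Gaussian noise
    # calibrated to the clipping norm.
    summed = clipped[included].sum(axis=0)
    noisy = summed + rng.normal(0.0, sigma * clip_norm, size=d)

    # Normalize by the realized batch size to form the update direction.
    return noisy / max(included.sum(), 1)
```

In a full implementation, the per-point sampling rates (and possibly per-point noise or clipping scales) would be calibrated with a privacy accountant so that each point's cumulative privacy loss over training stays within its individual budget.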