Abstract
In recent years, Local Differential Privacy (LDP), a robust
privacy-preserving methodology, has gained widespread adoption in real-world
applications. With LDP, users can perturb their data on their devices before
sending it out for analysis. However, as the collection of multiple sensitive
attributes becomes more prevalent across various industries, collecting a
single sensitive attribute under LDP may not be sufficient. Correlated
attributes in the data may still lead to inferences about the sensitive
attribute. This paper empirically studies the impact of collecting multiple
sensitive attributes under LDP on fairness. We propose a novel privacy budget
allocation scheme that accounts for the varying domain sizes of sensitive
attributes. In our experiments, this scheme generally achieved a better
privacy-utility-fairness trade-off than the state-of-the-art solution. Our results show that LDP
leads to slightly improved fairness in learning problems without significantly
affecting the performance of the models. We conduct extensive experiments on
three benchmark datasets using several group fairness metrics and
seven state-of-the-art LDP protocols. Overall, this study challenges the common
belief that differential privacy necessarily leads to worsened fairness in
machine learning.
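
To make the setting concrete, the sketch below illustrates the two mechanisms the abstract refers to: splitting a user's total privacy budget across several sensitive attributes according to their domain sizes, and perturbing each attribute on the device with an LDP protocol (here Generalized Randomized Response, a standard frequency-oracle protocol). The log-proportional weighting in allocate_budget and all function and variable names are assumptions made purely for illustration; the abstract does not specify the paper's actual allocation rule.

    import math
    import random

    def allocate_budget(total_epsilon, domain_sizes):
        # Hypothetical allocation: each attribute receives a share of the total
        # budget proportional to log(domain size), so attributes with larger
        # domains (which need more budget for comparable utility under GRR)
        # receive more of epsilon. The paper's actual rule may differ.
        weights = [math.log(d) for d in domain_sizes]
        total = sum(weights)
        return [total_epsilon * w / total for w in weights]

    def grr_perturb(value, domain, epsilon):
        # Generalized Randomized Response: report the true value with
        # probability e^eps / (e^eps + d - 1), otherwise report a uniformly
        # random *other* value from the domain.
        d = len(domain)
        p = math.exp(epsilon) / (math.exp(epsilon) + d - 1)
        if random.random() < p:
            return value
        return random.choice([v for v in domain if v != value])

    # Example: two sensitive attributes with domain sizes 2 and 8 sharing a
    # total per-user budget of epsilon = 1.0, perturbed on the user's device.
    domains = [["F", "M"], [f"job_{i}" for i in range(8)]]
    record = ["F", "job_3"]
    budgets = allocate_budget(1.0, [len(d) for d in domains])
    report = [grr_perturb(v, dom, eps) for v, dom, eps in zip(record, domains, budgets)]
    print(budgets, report)

Under this illustrative weighting, the larger-domain attribute receives roughly three quarters of the budget; only the perturbed report leaves the device.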