Abstract
As with traditional machine learning, models trained with federated
learning may exhibit disparate performance across demographic groups. Model
holders must identify these disparities to mitigate undue harm to the affected
groups. However, measuring a model's performance for a group requires access to
information about group membership, which, for privacy reasons, often has
limited availability. We propose novel locally differentially private
mechanisms to measure differences in performance across groups while protecting
the privacy of group membership. To analyze the effectiveness of the
mechanisms, we bound their error in estimating a disparity when optimized for a
given privacy budget. Our results show that the error rapidly decreases for
realistic numbers of participating clients, demonstrating that, contrary to
what prior work suggested, protecting privacy is not necessarily in conflict
with identifying performance disparities of federated models.