Group fairness ensures that the outcomes of machine learning (ML) based
decision-making systems are not biased towards certain groups of people
defined by a sensitive attribute such as gender or ethnicity. Achieving group
fairness in Federated Learning (FL) is challenging because mitigating bias
inherently requires using the sensitive attribute values of all clients, while
FL is designed precisely to protect privacy by not giving access to the
clients' data. As we show in this paper, this conflict between fairness and
privacy in FL can be resolved by combining FL with Secure Multiparty
Computation (MPC) and Differential Privacy (DP). To this end, we propose a
method for training group-fair ML models in cross-device FL under complete and
formal privacy guarantees, without requiring the clients to disclose their
sensitive attribute values.
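
To make the combination concrete, the following is a minimal, self-contained sketch of the general idea (secure aggregation of per-group statistics plus DP noise), not the protocol proposed in the paper: clients secret-share group-wise prediction counts so that no single party ever sees an individual's sensitive attribute, and noise calibrated to the query's sensitivity is added before the aggregate fairness statistic is released. The three-server setup, the privacy budget, and all function names are illustrative assumptions.

```python
# Illustrative sketch (not the paper's protocol): clients additively
# secret-share per-group prediction counts, so sensitive attributes are
# never revealed in the clear; Laplace noise makes the released
# aggregate statistic differentially private.
import secrets
import numpy as np

PRIME = 2**61 - 1  # field modulus for additive secret sharing

def share(value: int, n_shares: int) -> list[int]:
    """Split an integer into n_shares additive shares modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recover the shared value by summing all shares modulo PRIME."""
    return sum(shares) % PRIME

rng = np.random.default_rng(0)

# Each client holds a sensitive attribute a in {0, 1} and a model
# prediction y_hat in {0, 1}; neither is sent to the server in the clear.
n_clients = 1000
a = rng.integers(0, 2, size=n_clients)
y_hat = rng.integers(0, 2, size=n_clients)

n_servers = 3  # non-colluding computation parties, a common MPC setup
# Per client, secret-share the four indicator counts needed for a
# demographic-parity statistic: [a=0], [a=1], [a=0 & y=1], [a=1 & y=1].
totals = [[0] * n_servers for _ in range(4)]
for ai, yi in zip(a, y_hat):
    stats = [1 - ai, ai, (1 - ai) * yi, ai * yi]
    for k, v in enumerate(stats):
        for s, sh in enumerate(share(int(v), n_servers)):
            totals[k][s] = (totals[k][s] + sh) % PRIME

# Reconstruct only the aggregates and add Laplace noise before release.
# One client changes the 4-count vector by at most 4 in L1 norm, so a
# scale of 4/epsilon covers the whole vector query (epsilon illustrative).
epsilon = 1.0
agg = [reconstruct(t) for t in totals]
noisy = [c + rng.laplace(scale=4 / epsilon) for c in agg]

n0, n1, pos0, pos1 = noisy
dp_gap = abs(pos1 / max(n1, 1) - pos0 / max(n0, 1))
print(f"noisy demographic parity gap: {dp_gap:.3f}")
```

In this sketch the noise is added after reconstruction for readability; in a real deployment it would be generated inside the MPC, or contributed by each server, so that the exact counts are never visible to any single party.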