In this paper, we propose FedGP, a framework for privacy-preserving data
release in the federated learning setting. We use generative adversarial
networks, whose generator components are trained with the FedAvg algorithm, to
draw privacy-preserving artificial data samples and empirically assess the risk
of information disclosure. Our experiments show that FedGP is able to generate
labelled data of high quality to successfully train and validate supervised
models. Finally, we demonstrate that our approach significantly reduces
the vulnerability of such models to model inversion attacks.
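
The FedAvg aggregation step underlying the generator training could be sketched as follows. This is a minimal illustration only, assuming NumPy parameter arrays; the names (`fedavg`, `client_weights`, `client_sizes`) are illustrative and not from the paper:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation step: average per-client parameters,
    weighted by each client's number of local training samples.

    client_weights: list (one entry per client) of lists of np.ndarray layers.
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        # Weighted sum of this layer's parameters across clients.
        avg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(avg)
    return averaged

# Two toy clients, each holding a single 2-parameter "layer".
w_a = [np.array([1.0, 3.0])]  # client A, 1 local sample
w_b = [np.array([3.0, 5.0])]  # client B, 3 local samples
global_w = fedavg([w_a, w_b], client_sizes=[1, 3])
# global_w[0] is the weighted average 0.25 * w_a + 0.75 * w_b
```

In the federated GAN setting described above, only the generator parameters would be aggregated this way, while discriminators remain local to clients.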