Abstract
A Generative Adversarial Network (GAN) is a deep-learning generative model in
the field of Machine Learning (ML) that trains two Neural Networks (NNs)
against each other on a sizable data set. In certain fields, such as medicine, the
training data may be hospital patient records that are stored across different
hospitals. The classic centralized approach would involve sending the data to a
centralized server where the model would be trained. However, that would
involve breaching the privacy and confidentiality of the patients and their
data, which would be unacceptable. Therefore, Federated Learning (FL), an ML
technique that trains ML models in a distributed setting without data ever
leaving the host device, would be a better alternative to the centralized
option. In this ML technique, only parameters and certain metadata would be
communicated. In spite of that, there still exist attacks that can infer user
data using the parameters and metadata. A fully privacy-preserving solution
involves homomorphically encrypting (HE) the data communicated. This paper will
focus on the performance loss of training an FL-GAN with three different types
of Homomorphic Encryption: Partially Homomorphic Encryption (PHE), Somewhat
Homomorphic Encryption (SHE), and Fully Homomorphic Encryption (FHE). We will
also test the performance loss of Secure Multi-Party Computation (MPC), as it
has homomorphic properties. All results are also compared against training an
FL-GAN without encryption. Our experiments show that the more complex the
encryption method, the longer training takes, with the extra time required for
HE being quite significant compared to the unencrypted FL baseline.
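As a toy illustration (not part of the paper's experiments), the homomorphic property that makes MPC-style secure aggregation possible can be sketched with pairwise additive masking: each client hides its update behind random masks that cancel in the sum, so the server recovers only the aggregate. The function name `secure_aggregate` and the scalar integer "updates" are hypothetical simplifications of real model-parameter vectors.

```python
import random

def secure_aggregate(updates, modulus=2**32):
    """Toy additive-masking aggregation (an MPC-style technique):
    each client's update is blinded by pairwise random masks that
    cancel in the sum, so the server learns only the total."""
    n = len(updates)
    masked = [u % modulus for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # Client i adds mask m, client j subtracts the same m;
            # the masks cancel once all contributions are summed.
            m = random.randrange(modulus)
            masked[i] = (masked[i] + m) % modulus
            masked[j] = (masked[j] - m) % modulus
    # The server sums masked values without seeing any plain update.
    return sum(masked) % modulus

# Individual updates stay hidden, yet the aggregate is exact.
print(secure_aggregate([5, 17, 42]))  # → 64
```

In a real FL deployment each "update" would be a vector of model parameters and the masks would be derived from pairwise key agreement, but the cancellation idea is the same.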