Generative Adversarial Networks (GANs) have made the release of synthetic
images a viable approach to sharing data without exposing the original dataset.
It has been shown that such synthetic data can be used for a variety of
downstream tasks such as training classifiers that would otherwise require the
original dataset to be shared. However, recent work has shown that GAN
models and their synthetically generated data can be used by an adversary
with access to the entire dataset and some auxiliary information to infer
training set membership. Current approaches to mitigating this problem (such
as DPGAN) yield dramatically poorer sample quality than the original
non-private GANs. Here we develop a new GAN architecture (privGAN) in which
the generator is trained not only to fool the discriminator but also to
defend against membership inference attacks. The new mechanism protects
against this mode of attack while incurring negligible loss in downstream
performance. In addition, we show that our algorithm explicitly prevents
overfitting to the training set, which explains why the protection is so
effective. The main contributions of this paper are: i) we propose a novel GAN
architecture that can generate synthetic data in a privacy-preserving manner
without additional hyperparameter tuning or architecture selection, ii) we
provide a theoretical understanding of the optimal solution of the privGAN loss
function, iii) we demonstrate the effectiveness of our model against several
white-box and black-box attacks on several benchmark datasets, iv) we demonstrate
on three common benchmark datasets that synthetic images generated by privGAN
lead to negligible loss in downstream performance when compared against
non-private GANs.
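To make the idea of training the generator against both a discriminator and a membership inference adversary concrete, the following is a minimal illustrative sketch of a privGAN-style generator objective. The function name, the `lam` weight, and the reduction of the privacy adversary to a single "correct identification" probability are our assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def generator_loss(d_real_prob, dp_correct_prob, lam=1.0):
    """Illustrative privGAN-style generator loss (a sketch, not the exact loss).

    d_real_prob:     the discriminator's probability that generated samples
                     are real; the standard GAN term rewards driving this up.
    dp_correct_prob: a privacy discriminator's probability of correctly
                     identifying which training split a synthetic sample came
                     from; the privacy term rewards driving this down toward
                     chance.
    lam:             hypothetical weight trading sample quality vs. privacy.
    """
    eps = 1e-12  # numerical safety for log(0)
    # Standard adversarial term: minimized when the discriminator is fooled.
    adv_term = -np.mean(np.log(d_real_prob + eps))
    # Privacy term: minimized when the privacy discriminator cannot tell
    # which split a sample came from, discouraging memorization of any split.
    priv_term = np.mean(np.log(dp_correct_prob + eps))
    return adv_term + lam * priv_term
```

Under this sketch, a generator that fools both networks attains a lower loss than one whose samples the privacy discriminator can trace back to a particular split, which is how the defense against membership inference is folded directly into training rather than bolted on afterward.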