Deep learning has achieved overwhelming success, spanning from discriminative
models to generative models. In particular, deep generative models have
enabled a new level of performance in a myriad of areas, ranging from media
manipulation to sanitized dataset generation. Despite this great success, the
potential risks of privacy breaches caused by generative models have not been
analyzed systematically. In this paper, we focus on membership inference attacks
against deep generative models, which reveal information about the training data
used for the victim models. Specifically, we present the first taxonomy of
membership inference attacks, encompassing not only existing attacks but also
our novel ones. In addition, we propose the first generic attack model that can
be instantiated in a wide range of settings and is applicable to various kinds
of deep generative models. Moreover, we provide a theoretically grounded attack
calibration technique, which consistently boosts the attack performance in all
cases, across different attack settings, data modalities, and training
configurations. We complement the systematic analysis of attack performance
with a comprehensive experimental study that investigates the effectiveness of
various attacks w.r.t. model type and training configurations, over three
diverse application scenarios (i.e., images, medical data, and location data).
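As a rough, hedged illustration of the general idea (not the specific attack model or calibration technique developed in the paper), the sketch below scores a query sample by how well it can be reconstructed from samples drawn from the victim generator, and calibrates that score against a reference generator assumed to be trained on disjoint data; the helper `sample_from`, the variable names, and the nearest-neighbor distance are assumptions made purely for this example.

```python
import numpy as np

def reconstruction_score(x, generated, k=1):
    # Membership score: negative distance from the query x to its k nearest
    # generated samples. A smaller reconstruction error yields a higher score,
    # suggesting x is more likely to have been in the training set.
    dists = np.linalg.norm(generated - x, axis=1)
    return -np.sort(dists)[:k].mean()

def calibrated_score(x, victim_samples, reference_samples, k=1):
    # Calibrated score: subtract the same score computed against a reference
    # generator (trained on disjoint data), so intrinsically "easy" samples
    # are not mistaken for members of the victim's training set.
    return (reconstruction_score(x, victim_samples, k)
            - reconstruction_score(x, reference_samples, k))

# Hypothetical usage: `sample_from(generator, n)` is assumed to return an
# (n, d) array of flattened samples drawn from a trained generator.
# victim_samples = sample_from(victim_generator, n=10_000)
# reference_samples = sample_from(reference_generator, n=10_000)
# score = calibrated_score(query.flatten(), victim_samples, reference_samples)
# is_member = score > threshold  # threshold tuned on held-out data
```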