Deep generative models have attracted considerable attention for their ability to
generate data in applications ranging from healthcare to financial technology to
surveillance, with generative adversarial networks (GANs) and variational
auto-encoders (VAEs) being the most popular. Yet, as with all machine
learning models, concerns over security breaches and privacy leaks persist,
and deep generative models are no exception. These models have advanced so
rapidly in recent years that work on their security is still in its infancy. In
an attempt to audit the current and future threats against these models, and to
provide a roadmap for short-term defense preparations, we present this
comprehensive and specialized survey on the security and privacy preservation
of GANs and VAEs. Our focus is on the inner connection between attacks and
model architectures and, more specifically, on five components of deep
generative models: the training data, the latent code, the generators/decoders
of GANs/VAEs, the discriminators/encoders of GANs/VAEs, and the generated
data. For each model, component, and attack, we review the current research
progress and identify the key challenges. The paper concludes with a discussion
of possible future attacks and research directions in the field.