The robustness and security of machine learning (ML) systems are intertwined:
a non-robust ML system (a classifier, a regressor, etc.) can be attacked
through a wide variety of exploits. With the advent of scalable deep learning
methodologies, considerable emphasis has been placed on the robustness of
supervised, unsupervised, and reinforcement learning algorithms. Here, we
study the robustness of the latent space of a deep variational autoencoder
(dVAE), an unsupervised generative framework, and show that it is indeed
possible to perturb the latent space so as to flip the class prediction while
keeping the classification probability approximately equal before and after
the attack. This means that an agent inspecting the decoder's outputs would
remain oblivious to the attack.
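
To make the threat model concrete, the following is a minimal sketch (in
PyTorch) of one way such a latent-space attack could be implemented; it is
not the paper's exact method. The modules encoder, decoder, and classifier
are hypothetical pretrained networks, with the encoder assumed to return the
posterior mean and log-variance. A perturbation delta of the latent code is
optimized by gradient descent so that the decoded sample is assigned a chosen
target class while its top-class confidence stays close to the original
value.

    import torch
    import torch.nn.functional as F

    def latent_attack(encoder, decoder, classifier, x, target_class,
                      steps=100, lr=0.05, conf_weight=1.0):
        """Perturb the latent code of x so the decoded sample is classified
        as target_class with roughly the original confidence."""
        with torch.no_grad():
            z0, _ = encoder(x)            # assume (mean, logvar); use the mean
            p0 = F.softmax(classifier(decoder(z0)), dim=1)
            orig_conf = p0.max(dim=1).values   # confidence before the attack

        delta = torch.zeros_like(z0, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        target = torch.full((x.size(0),), target_class,
                            dtype=torch.long, device=x.device)

        for _ in range(steps):
            logits = classifier(decoder(z0 + delta))
            probs = F.softmax(logits, dim=1)
            # push the prediction toward the target class ...
            flip_loss = F.cross_entropy(logits, target)
            # ... while matching the original top-class confidence
            conf_loss = (probs.max(dim=1).values - orig_conf).pow(2).mean()
            loss = flip_loss + conf_weight * conf_loss
            opt.zero_grad()
            loss.backward()
            opt.step()

        return (z0 + delta).detach()

The confidence-matching term is what makes the attack stealthy in the sense
described above: without it, a flipped prediction would typically arrive with
a conspicuously different probability.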