Flow-based generative models leverage invertible generator functions to fit a
distribution to the training data using maximum likelihood. Despite their use
in several application domains, the robustness of these models to adversarial
attacks has hardly been explored. In this paper, we study the adversarial
robustness of flow-based generative models both theoretically (for some simple
models) and empirically (for more complex ones). First, we consider a linear
flow-based generative model and compute optimal sample-specific and universal
adversarial perturbations that maximally decrease the likelihood scores. Using
this result, we study the well-known adversarial training procedure and
characterize the fundamental trade-off between model robustness and accuracy.
Next, we empirically study the robustness of two
prominent deep, non-linear, flow-based generative models, namely GLOW and
RealNVP. We design two types of adversarial attacks: one that minimizes the
likelihood scores of in-distribution samples, and another that maximizes the
likelihood scores of out-of-distribution ones. We find that GLOW and
RealNVP are extremely sensitive to both types of attacks. Finally, using a
hybrid adversarial training procedure, we significantly boost the robustness of
these generative models.
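
To make the two attack objectives concrete, a minimal sketch in our own notation (the $\ell_\infty$ constraint, the budget $\epsilon$, and the symbols $p_\theta$, $x$, $x'$ are illustrative assumptions, not fixed by the abstract) is
\[
\delta_{\mathrm{in}} \;=\; \arg\min_{\|\delta\|_\infty \le \epsilon} \, \log p_\theta(x + \delta),
\qquad
\delta_{\mathrm{out}} \;=\; \arg\max_{\|\delta\|_\infty \le \epsilon} \, \log p_\theta(x' + \delta),
\]
where $x$ is an in-distribution sample, $x'$ is an out-of-distribution one, and $\log p_\theta$ is the exact log-likelihood of the flow model obtained through the change-of-variables formula of its invertible generator.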