Machine-learning (ML) models, especially deep neural networks (DNNs), have
shown significant promise in many areas. However, researchers have recently
demonstrated that these models are vulnerable to adversarial examples:
slightly perturbed inputs that cause misclassification. The existence of
adversarial examples has hindered the deployment of ML models in
safety-critical domains, such as security.
Several defenses against adversarial examples exist in the literature. One
important class is manifold-based defenses, in which a sample is ``pulled
back'' onto the data manifold before being classified. These defenses rest on
the assumption that the data lie on a manifold of lower dimension than the
input space, and they use a generative model to approximate the input
distribution. In this paper, we investigate the following question: do the
generative models used in manifold-based defenses need to be topology-aware? We
suggest the answer is yes, and we provide theoretical and empirical evidence to
support our claim.
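
To make the defense pipeline concrete, below is a minimal sketch of a
manifold-based defense, using an autoencoder as a stand-in for the generative
model. This is an illustration of the general idea, not the paper's
implementation; the module names and layer sizes are hypothetical.

```python
# Minimal sketch of a manifold-based defense: "pull back" an input onto the
# learned data manifold via a generative model's reconstruction, then
# classify the projected sample. An autoencoder stands in for the generative
# model here; all shapes are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class ManifoldDefense(nn.Module):
    def __init__(self, autoencoder: nn.Module, classifier: nn.Module):
        super().__init__()
        self.autoencoder = autoencoder  # approximates the input distribution
        self.classifier = classifier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Project the (possibly adversarial) sample back toward the manifold
        # by passing it through the encode/decode map, then classify it.
        x_projected = self.autoencoder(x)
        return self.classifier(x_projected)

# Toy instantiation on flattened 28x28 inputs (hypothetical dimensions).
autoencoder = nn.Sequential(
    nn.Linear(784, 32), nn.ReLU(),  # encoder: compress to a low-dim latent
    nn.Linear(32, 784),             # decoder: map back to input space
)
classifier = nn.Linear(784, 10)

defense = ManifoldDefense(autoencoder, classifier)
logits = defense(torch.randn(1, 784))  # shape: (1, 10)
```

The design choice at issue in this paper is the generative model itself: if
its latent space does not match the topology of the data manifold, the
projection step above can map samples to the wrong part of the manifold.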