Deep neural network classifiers suffer from adversarial vulnerability:
well-crafted, imperceptible changes to the input data can alter the
classifier's decision. In this regard, the study of powerful adversarial
attacks can help shed light on the sources of this vulnerability. In this
paper, we propose a
novel black-box adversarial attack using normalizing flows. We show how an
adversary can be found by searching over the base distribution of a
pre-trained flow-based model. In this way, we generate adversaries that
closely resemble the original data, since the perturbations themselves
follow the shape of the data distribution. We then
demonstrate the competitive performance of the proposed approach against
well-known black-box adversarial attack methods.
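To make the core idea concrete, the sketch below illustrates a black-box search over a flow's base distribution: perturb the latent code of the input, map candidates back through the flow's inverse, and keep whichever candidate most reduces the victim classifier's margin. This is only a minimal illustration, not the paper's algorithm: the identity flow, the toy linear classifier, the function names (flow_forward, flow_inverse, classifier_logits), and the plain random-search loop are all assumptions standing in for the pre-trained flow, the queried black-box model, and the actual optimizer.

```python
import numpy as np

# Hypothetical stand-ins for the paper's components. In practice these
# would be a pre-trained normalizing flow f (with exact inverse) and a
# victim classifier queried in a black-box fashion.
def flow_forward(x):       # x -> z (data space to base distribution)
    return x               # identity flow, for illustration only

def flow_inverse(z):       # z -> x (base distribution to data space)
    return z

def classifier_logits(x):  # toy linear victim model (assumption)
    W = np.array([[1.0, -1.0], [-1.0, 1.0]])
    return x @ W.T

def latent_attack(x, true_label, sigma=0.1, n_samples=50, n_iters=100, seed=0):
    """Random search over the flow's base distribution: perturb the
    latent code z = f(x) and map candidates back through f^{-1}."""
    rng = np.random.default_rng(seed)
    best_z = flow_forward(x)
    best_margin = np.inf
    for _ in range(n_iters):
        # Sample latent perturbations around the current best code.
        cands = best_z + sigma * rng.standard_normal((n_samples, best_z.shape[-1]))
        logits = classifier_logits(flow_inverse(cands))
        # Margin = true-class logit minus best other logit; negative => fooled.
        masked = np.where(np.arange(logits.shape[1]) == true_label, -np.inf, logits)
        margins = logits[:, true_label] - masked.max(axis=1)
        i = margins.argmin()
        if margins[i] < best_margin:
            best_margin, best_z = margins[i], cands[i]
        if best_margin < 0:  # misclassified: attack succeeded
            break
    return flow_inverse(best_z), best_margin

x_adv, margin = latent_attack(np.array([1.0, 0.0]), true_label=0)
print(margin < 0, x_adv)
```

Because candidates are produced by inverting the flow rather than by adding raw pixel noise, the perturbations stay on (or near) the data manifold the flow was trained on, which is the intuition behind adversaries that closely resemble the original data.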