Deep learning classifiers are susceptible to well-crafted, imperceptible
variations of their inputs, known as adversarial examples. Studying powerful
attack models thus sheds light on the sources of vulnerability in these
classifiers and can ultimately lead to more robust ones. In this paper, we
introduce AdvFlow: a novel black-box adversarial attack method on image
classifiers that exploits the power of normalizing flows to model the density
of adversarial examples around a given target image. We observe that the
proposed method generates adversaries that closely follow the clean data
distribution, a property that makes their detection less likely. Moreover, our
experimental results show that the proposed approach performs competitively
with existing attack methods on defended classifiers. The code is available at
https://github.com/hmdolatabadi/AdvFlow.
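
For illustration, the following is a minimal sketch of how a flow-based
black-box attack of this kind might look. It is a conceptual sketch, not the
authors' exact algorithm: the flow API (`flow.inverse`, `flow.forward`), the
attack function name, and all hyperparameters are hypothetical, and it assumes
a pretrained normalizing flow over clean images plus a classifier that can
only be queried for logits.

```python
import torch
import torch.nn.functional as F

# Conceptual sketch of a flow-based black-box attack (hypothetical API).
# `flow` is a normalizing flow pretrained on clean images with illustrative
# methods flow.inverse(x) -> z and flow.forward(z) -> x; `classifier` is a
# black-box model that we may only query for logits.
def flow_attack(flow, classifier, x, label, steps=100, pop=20,
                sigma=0.1, lr=0.01, eps=8 / 255):
    """Search the flow's latent space for an adversarial example near x."""
    z = flow.inverse(x)       # latent code of the clean image, (1, C, H, W)
    mu = torch.zeros_like(z)  # mean of the latent search distribution
    for _ in range(steps):
        # Sample a population of latent perturbations (evolution-style search).
        noise = sigma * torch.randn(pop, *z.shape[1:])
        cand = flow.forward(z + mu + noise)
        # Keep candidates close to x so the perturbation stays imperceptible.
        cand = torch.clamp(cand, x - eps, x + eps).clamp(0.0, 1.0)
        # Query the black box; a higher loss on the true label is better here.
        loss = F.cross_entropy(classifier(cand), label.expand(pop),
                               reduction="none")
        # Estimate the search gradient from the scored samples and step.
        w = (loss - loss.mean()) / (loss.std() + 1e-8)
        mu = mu + lr * (w.view(-1, 1, 1, 1) * noise).mean(0, keepdim=True)
    x_adv = flow.forward(z + mu)
    return torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
```

Because every candidate is decoded by a flow trained on clean data, the
resulting adversaries tend to lie near the clean data distribution, which is
the detection-evading property the abstract highlights.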