Many optimization methods for generating black-box adversarial examples have
been proposed, but the initialization of these optimizers has received little
attention. We show that the choice of starting points is indeed crucial, and
that the performance of state-of-the-art attacks depends on it.
First, we discuss desirable properties of starting points for attacking image
classifiers, and how they can be chosen to increase query efficiency. Notably,
we find that simply copying small patches from other images is a valid
strategy. We then present an evaluation on ImageNet that clearly demonstrates
the effectiveness of this method: our initialization scheme reduces the number
of queries required for a state-of-the-art Boundary Attack by 81%,
significantly outperforming previous results reported for targeted black-box
adversarial examples.
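To make the patch-copying idea concrete, here is a minimal sketch of how such an initialization might look. The function name `patch_init`, the patch size, and the random placement are illustrative assumptions, not the paper's exact scheme; the point is simply that a starting point is formed by pasting a small region from a donor image (e.g. one of the target class) onto the original.

```python
import numpy as np

def patch_init(original, donor, patch_size=50, rng=None):
    """Hypothetical sketch: build a starting point for a black-box attack
    by copying one small square patch from a donor image onto the original.
    Assumes both images have the same height and width."""
    rng = np.random.default_rng(rng)
    h, w = original.shape[:2]
    # Pick a random top-left corner so the patch fits inside the image.
    y = rng.integers(0, h - patch_size + 1)
    x = rng.integers(0, w - patch_size + 1)
    start = original.copy()
    # Overwrite that region with the donor's pixels at the same location.
    start[y:y + patch_size, x:x + patch_size] = \
        donor[y:y + patch_size, x:x + patch_size]
    return start

# Toy example: an all-zeros "original" and an all-ones "donor" image.
orig = np.zeros((224, 224, 3), dtype=np.float32)
donor = np.ones((224, 224, 3), dtype=np.float32)
init = patch_init(orig, donor, patch_size=50, rng=0)
```

Such a starting point is already close to the original image (only a small patch differs), which is the property an optimization-based attack can then exploit.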