Since Biggio et al. (2013) and Szegedy et al. (2013) first drew attention to
adversarial examples, there has been a flood of research into defending and
attacking machine learning models. However, almost all proposed attacks assume
white-box access to a model. In other words, the attacker is assumed to have
perfect knowledge of the model's weights and architecture. With this insider
knowledge, a white-box attack can leverage gradient information to craft
adversarial examples. Black-box attacks assume no knowledge of the model
weights or architecture. These attacks craft adversarial examples using only
the information contained in the logits or the hard classification label. Here,
we assume the attacker can query the logits to find an adversarial example.
Empirically, we show that two-sided stochastic gradient estimation techniques
are insensitive to the choice of scaling parameter and can be used to mount
powerful black-box attacks that require relatively few model queries.
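As a minimal sketch of the idea (our own illustrative code, not the exact estimator used in any particular published attack), the snippet below estimates a loss gradient from logit-based queries alone using two-sided (antithetic) finite differences along random directions; the names `loss_fn`, `sigma`, and `n_samples` are hypothetical parameters chosen for illustration.

```python
import numpy as np

def two_sided_grad_estimate(loss_fn, x, sigma=1e-3, n_samples=50, rng=None):
    """Estimate the gradient of loss_fn at x with two-sided stochastic
    finite differences.

    loss_fn   : callable mapping an input array to a scalar loss computed
                from the model's logits (query access only, no gradients).
    sigma     : scaling (smoothing) parameter for the perturbation.
    n_samples : number of random directions; each costs two model queries.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)           # random search direction
        diff = loss_fn(x + sigma * u) - loss_fn(x - sigma * u)
        grad += (diff / (2.0 * sigma)) * u         # central-difference term
    return grad / n_samples

# Example: estimate the gradient of a simple quadratic "loss".
x0 = np.array([1.0, -2.0])
g = two_sided_grad_estimate(lambda z: float(np.sum(z ** 2)), x0,
                            sigma=1e-3, n_samples=200)
# g should be close to the true gradient 2 * x0 = [2, -4].
```

One plausible intuition for the insensitivity to `sigma`: because each direction `u` is queried symmetrically at `x + sigma*u` and `x - sigma*u`, the leading first-order error terms cancel, leaving a higher-order dependence on the scaling parameter than a one-sided estimate would have.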