The adversarial robustness literature considers two main attack models: black-box and white-box. We view these threat models as the two ends of a fine-grained spectrum, indexed by the number of queries the adversary
can ask. Using this point of view, we investigate how many queries the adversary needs to mount an attack comparable to the best possible attack in the white-box model. We give a lower bound on this number of queries in terms of the entropy of the classifier's decision boundaries. Using this result,
we analyze two classical learning algorithms on two synthetic tasks for which
we prove meaningful security guarantees. The resulting bounds suggest that some learning algorithms are inherently more robust against query-bounded adversaries than others.
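
To convey the flavor of such a bound, consider the following hedged sketch (an illustration under stated assumptions, not the exact theorem of the paper): suppose the trained classifier's decision boundary $B$ is drawn from a distribution with Shannon entropy $H(B)$, and suppose each query returns a single binary label, i.e., at most one bit of information. An adversary whose attack matches the white-box optimum must, in this idealized setting, identify $B$ up to the attack's tolerance; since $q$ binary answers distinguish at most $2^q$ boundaries, while a boundary of entropy $H(B)$ requires on the order of $2^{H(B)}$ distinguishable outcomes, the query budget must satisfy
\[
  q \;\ge\; \Omega\bigl(H(B)\bigr).
\]
Under this reading, learning algorithms that induce high-entropy decision boundaries force a query-bounded adversary to spend more queries, which is one way the gap between algorithms suggested above can arise.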