Most current classifiers are vulnerable to adversarial examples, small input
perturbations that change the classification output. Many existing attack
algorithms cover various settings, from white-box to black-box classifiers, but
they typically assume that the classifier's answers are deterministic and often
fail when they are not. We therefore propose a new adversarial decision-based attack
specifically designed for classifiers with probabilistic outputs. It is based
on the HopSkipJump attack by Chen et al. (2019, arXiv:1904.02144v5), a strong
and query-efficient decision-based attack originally designed for deterministic
classifiers. Our P(robabilisticH)opSkipJump attack adapts its number of queries
to maintain HopSkipJump's original output quality across various noise levels,
while converging to its query efficiency as the noise level decreases. We test
our attack on various noise models, including state-of-the-art off-the-shelf
randomized defenses, and show that they offer almost no extra robustness to
decision-based attacks. Code is available at
https://github.com/cjsg/PopSkipJump.
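To give a feel for the core difficulty, the sketch below shows one simple way to query a classifier with probabilistic outputs: repeatedly sample its answer at a point and stop once the estimate is precise enough, so that near-deterministic answers cost few queries while noisier ones automatically trigger more. This is a minimal illustration of adapting the number of queries to the noise level, not the estimator actually used by PopSkipJump; the callable `noisy_classifier` and the parameter `half_width` are hypothetical names introduced for this example.

```python
import numpy as np

def estimate_label_probability(noisy_classifier, x, target_label,
                               max_queries=1000, half_width=0.1):
    """Estimate P[noisy_classifier(x) == target_label] for a classifier
    with probabilistic outputs, using only as many queries as the
    observed noise level requires.

    Illustrative sketch only: `noisy_classifier` is assumed to be a
    callable returning a (possibly random) label for input x.
    """
    hits, n = 0, 0
    while n < max_queries:
        hits += int(noisy_classifier(x) == target_label)
        n += 1
        p_hat = hits / n
        # Stop once a rough binomial standard error is small enough:
        # answers close to deterministic (p_hat near 0 or 1) need few
        # queries, while noisier answers keep the loop running longer.
        if n >= 10 and np.sqrt(p_hat * (1 - p_hat) / n) < half_width / 2:
            break
    return p_hat, n
```

As the classifier's noise vanishes, `p_hat * (1 - p_hat)` drops toward zero and the loop exits after the minimum number of samples, which mirrors (in a much cruder form) the abstract's claim that the attack's query cost converges to HopSkipJump's as the noise level decreases.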