Gaussian processes (GPs) are a widely adopted tool for sequentially
optimizing black-box functions whose evaluations are costly and potentially
noisy. Recent works on GP bandits have proposed to move beyond random noise and
devise algorithms robust to adversarial attacks. This paper studies this
problem from the attacker's perspective, proposing various adversarial attack
methods with differing assumptions on the attacker's strength and prior
information. Our goal is to understand adversarial attacks on GP bandits from
theoretical and practical perspectives. We focus primarily on targeted attacks
against the popular GP-UCB algorithm and a related elimination-based algorithm;
these attacks adversarially perturb the function $f$ to produce another function
$\tilde{f}$ whose optima lie in some target region $\mathcal{R}_{\rm target}$.
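As a point of reference, one natural way to formalize such a perturbation (the notation $x_t$, $y_t$, $c$, and $z_t$ is illustrative, not fixed by this abstract) is an additive corruption of each observed reward, so that the algorithm effectively interacts with $\tilde{f} = f + c$:
\[
y_t = \tilde{f}(x_t) + z_t, \qquad \tilde{f} = f + c, \qquad \operatorname*{arg\,max}_{x} \tilde{f}(x) \subseteq \mathcal{R}_{\rm target},
\]
where $x_t$ is the point queried at round $t$, $z_t$ is the observation noise, and the attack budget may be measured by, e.g., the cumulative perturbation $\sum_{t} |c(x_t)|$.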
Based on our theoretical analysis, we devise both white-box attacks (known $f$)
and black-box attacks (unknown $f$), with the former including a Subtraction
attack and a Clipping attack, and the latter including an Aggressive subtraction
attack. We demonstrate that adversarial attacks on GP bandits can succeed in
forcing the algorithm towards $\mathcal{R}_{\rm target}$ even with a low attack
budget, and we test our attacks' effectiveness on a diverse range of objective
functions.
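As a concrete illustration of the ideas above, the following Python sketch implements a Clipping-style white-box attack: the attacker caps every reward observed outside $\mathcal{R}_{\rm target}$ just below the function's maximum inside $\mathcal{R}_{\rm target}$, so that the perturbed function's optima fall in the target region. The toy objective, the interval target region, the choice of cap, and the random query rule standing in for GP-UCB are all illustrative assumptions rather than the paper's exact construction.
\begin{verbatim}
import numpy as np

def clipping_perturbation(f_x, x, in_target, cap):
    """Clipping-style perturbation c(x) (illustrative sketch).

    Outside the target region, the reward is pushed down to at most
    `cap`, chosen below the maximum of f inside the target region, so
    the perturbed function's optima lie in the target region.
    """
    if in_target(x):
        return 0.0                       # leave the target region untouched
    return min(cap - f_x, 0.0)           # clip f(x) down to at most `cap`

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x) + 0.5 * x  # toy objective on [0, 2]
in_target = lambda x: 1.6 <= x <= 2.0    # hypothetical target region
cap = max(f(x) for x in np.linspace(1.6, 2.0, 200)) - 0.1

budget = 0.0
for t in range(20):
    x_t = rng.uniform(0.0, 2.0)          # stand-in for the learner's query
    c_t = clipping_perturbation(f(x_t), x_t, in_target, cap)
    y_t = f(x_t) + c_t + rng.normal(scale=0.1)  # corrupted noisy reward
    budget += abs(c_t)                   # cumulative attack budget
print(f"total perturbation budget used: {budget:.3f}")
\end{verbatim}
Because $f$ is left unchanged inside $\mathcal{R}_{\rm target}$ and capped below its target-region maximum elsewhere, the learner's posterior is steered toward the target region while the per-round perturbation, and hence the spent budget, stays small whenever $f$ is already near the cap.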