In reward-poisoning attacks against reinforcement learning (RL), an attacker
can perturb the environment reward $r_t$ into $r_t+\delta_t$ at each step, with
the goal of forcing the RL agent to learn a nefarious policy. We categorize
such attacks by the infinity-norm constraint on $\delta_t$: we provide a lower
threshold below which reward poisoning is infeasible and RL is certified to be
safe, and a corresponding upper threshold above which the attack is feasible.
Feasible attacks can be further categorized as non-adaptive, where $\delta_t$
depends only on $(s_t, a_t, s_{t+1})$, or adaptive, where $\delta_t$ depends
further on the RL agent's learning process at time $t$. Non-adaptive
attacks have been the focus of prior works. However, we show that, under mild
conditions, adaptive attacks can force the agent into the nefarious policy in
a number of steps polynomial in the state-space size $|S|$, whereas
non-adaptive attacks require a number of steps exponential in $|S|$. We
provide a constructive proof that a Fast Adaptive Attack
strategy achieves the polynomial rate. Finally, we show empirically that an
attacker can find effective reward-poisoning attacks using state-of-the-art
deep RL techniques.
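
Writing $\|\delta\|_\infty = \sup_t |\delta_t|$ for the attack budget, the
feasibility dichotomy above can be stated schematically as follows; the
concrete thresholds $\underline{\Delta} \le \overline{\Delta}$ are derived in
the body of the paper and are only named symbolically here:
\[
\|\delta\|_\infty < \underline{\Delta} \;\Longrightarrow\; \text{no attack can
force the nefarious policy (RL certified safe)}, \qquad
\|\delta\|_\infty > \overline{\Delta} \;\Longrightarrow\; \text{a feasible
attack exists.}
\]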
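
As a toy illustration of the two attack classes (a minimal sketch, not the
paper's Fast Adaptive Attack), the snippet below poisons a tabular Q-learning
victim on a small chain MDP; the MDP, the learning hyperparameters, and all
function names are assumptions made for this example.
\begin{verbatim}
import numpy as np

N_STATES, N_ACTIONS = 5, 2   # chain MDP: action 1 moves right, action 0 left
TARGET_ACTION = 0            # nefarious target policy: always take action 0
BUDGET = 1.0                 # infinity-norm constraint: |delta_t| <= BUDGET

def chain_step(s, a):
    """Deterministic dynamics; the true reward is earned at the right end."""
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s_next, (1.0 if s_next == N_STATES - 1 else 0.0)

def nonadaptive_delta(s, a, s_next, Q):
    """Non-adaptive: ignores Q; a fixed perturbation per (s, a, s')."""
    return BUDGET if a == TARGET_ACTION else -BUDGET

def adaptive_delta(s, a, s_next, Q):
    """Adaptive: reads the victim's current Q-table and pushes just hard
    enough to make TARGET_ACTION greedy in state s."""
    gap = Q[s].max() - Q[s, TARGET_ACTION]
    d = gap + 0.1 if a == TARGET_ACTION else -(gap + 0.1)
    return float(np.clip(d, -BUDGET, BUDGET))

def run_victim(attack, episodes=300, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning on rewards poisoned by `attack`."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            a = (int(rng.integers(N_ACTIONS)) if rng.random() < eps
                 else int(Q[s].argmax()))
            s_next, r = chain_step(s, a)
            r_poisoned = r + attack(s, a, s_next, Q)  # victim sees r + delta
            Q[s, a] += alpha * (r_poisoned
                                + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

for name, attack in [("non-adaptive", nonadaptive_delta),
                     ("adaptive", adaptive_delta)]:
    Q = run_victim(attack)
    print(f"{name:13s} greedy policy:", Q.argmax(axis=1))
\end{verbatim}
Because the adaptive attacker reads the victim's current Q-table, it can
spend its budget exactly where the target action is not yet greedy; the
non-adaptive attacker must commit to a fixed per-transition perturbation.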