Recent studies have demonstrated that reinforcement learning (RL) agents are
susceptible to adversarial manipulation, similar to the vulnerabilities
previously shown in the supervised learning setting. While most existing work
studies the problem in the context of computer vision or console games, this
paper focuses on reinforcement learning in autonomous cyber defence under
partial observability. We demonstrate that in the black-box setting, where
the attacker has no direct access to the target RL model, causative
attacks---attacks that target the training process---can poison RL agents even
when the attacker has only partial observability of the environment.
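The attack procedure itself is not detailed at this point in the paper; the following is a minimal sketch of how such a black-box causative attack could operate, assuming an FGSM-style perturbation computed from an attacker-trained surrogate model. The names `poison_observation`, `visible_mask`, and `epsilon` are illustrative, not taken from the paper:

```python
import numpy as np

def fgsm_perturbation(surrogate_grad, epsilon):
    """FGSM-style step: epsilon times the sign of the surrogate
    model's loss gradient w.r.t. the observation."""
    return epsilon * np.sign(surrogate_grad)

def poison_observation(obs, visible_mask, surrogate_grad, epsilon=0.05):
    """Poison a training observation under black-box,
    partial-observability constraints.

    obs            : observation vector the victim agent will receive
    visible_mask   : boolean mask of the features the attacker can see
                     and modify (models partial observability)
    surrogate_grad : loss gradient from the attacker's surrogate model,
                     used because the target model is black-box
    """
    delta = fgsm_perturbation(surrogate_grad, epsilon)
    delta = np.where(visible_mask, delta, 0.0)  # hidden features stay untouched
    return np.clip(obs + delta, 0.0, 1.0)       # keep observations in a valid range
```

Applied during training, such perturbations corrupt the experience the agent learns from rather than its behaviour at test time, which is what makes the attack causative rather than evasive.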
In addition, we propose an inversion defence method that applies the opposite
of the perturbation an attacker is likely to use when generating adversarial
samples. Our experimental results show that this countermeasure effectively
reduces the impact of the causative attack without significantly affecting
the training process in non-attack scenarios.
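The defence is described only at this level here; below is a minimal sketch of the inversion idea, assuming the defender estimates the attacker's likely FGSM direction with its own model and subtracts it before the observation reaches the training agent. The function and parameter names are hypothetical:

```python
import numpy as np

def invert_defence(obs, defender_grad, epsilon=0.05):
    """Inversion defence: estimate the perturbation an attacker would
    most plausibly add (here, an FGSM-style step computed with the
    defender's own model) and apply its opposite before the observation
    is used for training."""
    estimated_delta = epsilon * np.sign(defender_grad)
    return np.clip(obs - estimated_delta, 0.0, 1.0)
```

If the attacker's perturbation matches this estimate, the addition and subtraction roughly cancel; on clean observations the correction is bounded by `epsilon` per feature, which fits the reported finding that training in non-attack scenarios is not significantly affected.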