Recent developments have established the vulnerability of deep reinforcement
learning to policy manipulation attacks via intentionally perturbed inputs,
known as adversarial examples. In this work, we propose a technique for
mitigating such attacks based on the addition of noise to the parameter space of
deep reinforcement learners during training. We experimentally verify the
effect of parameter-space noise in reducing the transferability of adversarial
examples, and demonstrate the promising performance of this technique in
mitigating the impact of white-box and black-box attacks at both test and
training times.
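
To make the core idea concrete, the following is a minimal sketch of parameter-space noise in a DQN-style agent, written in PyTorch. The network architecture, the noise scale sigma, and the act-with-perturbed-copy / learn-with-clean-network pattern are illustrative assumptions for exposition, not the authors' exact experimental setup.

```python
import copy
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small MLP Q-network (layer sizes are illustrative)."""
    def __init__(self, obs_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def perturbed_copy(q_net, sigma=0.1):
    """Return a copy of q_net with i.i.d. Gaussian noise added to every
    parameter, i.e., noise injected in parameter space rather than in
    the observations or actions."""
    noisy = copy.deepcopy(q_net)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * sigma)
    return noisy

# Illustrative use during training: select actions with the perturbed
# network, while gradient updates are applied to the clean network.
q_net = QNetwork()
obs = torch.randn(1, 4)  # dummy observation
noisy_q = perturbed_copy(q_net, sigma=0.1)
action = noisy_q(obs).argmax(dim=1).item()
print("action:", action)
```

In this sketch, sampling a fresh perturbed copy of the weights at each decision point varies the learned decision boundary during training, which is the mechanism the abstract credits with reducing the transferability of adversarial examples.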