This paper presents the first model extraction attack against Deep
Reinforcement Learning (DRL), which enables an external adversary to precisely
recover a black-box DRL model solely from its interactions with the environment.
Model extraction attacks against supervised Deep Learning models have been
widely studied. However, those techniques cannot be directly applied to the
reinforcement learning setting because of the high complexity, stochasticity,
and limited observable information of DRL models. We propose a novel
methodology that overcomes these challenges. The key insight of our approach
is that the process of DRL model extraction is equivalent to imitation
learning, a well-established approach for learning sequential decision-making
policies. Based on this observation, our methodology first builds a classifier
that reveals the training algorithm family of the targeted black-box DRL model
based solely on its predicted actions, and then leverages state-of-the-art
imitation learning techniques to replicate the model within the identified
algorithm family. Experimental results indicate that our methodology can
effectively recover DRL models with high fidelity and accuracy. We also
demonstrate two use cases to show that our model extraction attack can (1)
significantly improve the success rate of adversarial attacks, and (2)
stealthily steal DRL models even when they are protected by DNN watermarks.
These findings pose a severe threat to the intellectual property and privacy
protection of DRL applications.
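
To make the two-stage pipeline summarized above concrete, the following minimal sketch (our own simplification, not the paper's implementation) illustrates only the second stage: querying a black-box policy for state-action pairs and fitting a behavioral-cloning student network to imitate it. The `target_policy` stand-in, network sizes, and hyperparameters are hypothetical placeholders; the algorithm-family classifier of the first stage is omitted.

```python
# Minimal sketch of the extraction pipeline's imitation-learning stage.
# All names and hyperparameters are illustrative, not the paper's setup.
import numpy as np
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 2

def target_policy(state: np.ndarray) -> int:
    """Stand-in for the black-box DRL model: exposes only a discrete action."""
    return int(state[0] + state[2] > 0)          # toy deterministic rule

# --- Collect demonstrations by querying the black box on visited states ---
rng = np.random.default_rng(0)
states = rng.normal(size=(5000, STATE_DIM)).astype(np.float32)
actions = np.array([target_policy(s) for s in states], dtype=np.int64)

# --- Behavioral cloning: supervised learning on (state, action) pairs ---
student = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X, y = torch.from_numpy(states), torch.from_numpy(actions)
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(student(X), y)
    loss.backward()
    optimizer.step()

# Fidelity: how often the extracted policy matches the black-box actions.
with torch.no_grad():
    agreement = (student(X).argmax(dim=1) == y).float().mean().item()
print(f"action agreement with target policy: {agreement:.3f}")
```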