Much real-world data comes in the form of graphs. Graph neural networks
(GNNs), a new family of machine learning (ML) models, have been proposed to
fully leverage graph data to build powerful applications. In particular,
inductive GNNs, which can generalize to unseen data, have become mainstream in
this direction. Machine learning models have shown great potential in various tasks
and have been deployed in many real-world scenarios. Training a good model
requires a large amount of data and computational resources, which makes such a
model valuable intellectual property. Previous research has shown that ML models are
prone to model stealing attacks, which aim to steal the functionality of the
target models. However, most existing attacks focus on models trained on
images and text; little attention has been paid to models trained on graph
data, i.e., GNNs. In this paper, we fill this gap by proposing the
first model stealing attacks against inductive GNNs. We systematically define
the threat model and propose six attacks based on the adversary's background
knowledge and the target model's responses. Our evaluation on six
benchmark datasets shows that the proposed model stealing attacks against GNNs
achieve promising performance.
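
To make the attack pipeline concrete, the following is a minimal sketch of the generic model stealing workflow against a GNN, assuming a threat model in which the adversary can query the target and receives posterior probabilities as responses. The SimpleGCN architecture, the random query graph, and the KL-divergence matching loop are illustrative assumptions for exposition, not the paper's actual attack implementation.

```python
# Minimal sketch of a model stealing attack against a GNN (hypothetical
# names throughout; not the paper's implementation). The adversary queries
# the target for node posteriors, then trains a surrogate to match them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Two-layer GCN on a dense normalized adjacency: H' = A_hat @ H @ W."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w1(x))  # one round of neighborhood aggregation
        return a_hat @ self.w2(h)       # per-node class logits

def normalize_adj(a):
    """Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    a = a + torch.eye(a.size(0))
    d = a.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)

# Toy query graph: random features and a sparse symmetric adjacency.
n, in_dim, n_classes = 100, 16, 4
x = torch.randn(n, in_dim)
adj = (torch.rand(n, n) < 0.05).float()
a_hat = normalize_adj(torch.maximum(adj, adj.t()))

target = SimpleGCN(in_dim, 32, n_classes)     # stands in for the victim model
surrogate = SimpleGCN(in_dim, 32, n_classes)  # the adversary's copy

# Step 1: query the target; here the response type is full posteriors.
with torch.no_grad():
    posteriors = F.softmax(target(x, a_hat), dim=1)

# Step 2: train the surrogate to reproduce the stolen posteriors (soft labels).
opt = torch.optim.Adam(surrogate.parameters(), lr=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = F.kl_div(F.log_softmax(surrogate(x, a_hat), dim=1),
                    posteriors, reduction="batchmean")
    loss.backward()
    opt.step()
```

Varying the adversary's background knowledge (e.g., the query graph available) and the type of response returned by the target yields the different attack variants summarized above.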