Graph Neural Networks (GNNs), which generalize traditional deep neural
networks to graph data, have achieved state-of-the-art performance on several
graph analytical tasks. We focus on how trained GNN models could leak
information about the \emph{member} nodes that they were trained on. We
introduce two realistic settings for performing a membership inference (MI)
attack on GNNs. Using the simplest possible attack model, which relies only on
the posteriors of the trained model (black-box access), we thoroughly analyze
the properties of GNNs and of the datasets that dictate the differences in
their robustness to the MI attack. Whereas in traditional machine learning
models overfitting is considered the main cause of such leakage, we show that
in GNNs the additional structural information is the major contributing factor. We
support our findings by extensive experiments on four representative GNN
models. To prevent MI attacks on GNNs, we propose two effective defenses that
significantly decrease the attacker's inference accuracy, by up to 60%, without
degrading the target model's performance. Our code is available at
https://github.com/iyempissy/rebMIGraph.
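
As a rough illustration of the kind of black-box, posterior-based attack model
referred to above, the sketch below trains a simple binary attack classifier on
posterior vectors from a shadow model and then applies it to posteriors queried
from the target model. The synthetic Dirichlet-generated posteriors and the
MLPClassifier attack model are placeholder assumptions for illustration only;
the paper's actual attack and defenses are implemented in the linked repository.

```python
# Illustrative sketch of a black-box membership inference (MI) attack that uses
# only a model's posterior (class-probability) vectors. All data below is
# synthetic placeholder data; in practice the posteriors would come from a
# shadow GNN (to train the attack model) and the target GNN (to attack).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
num_classes = 7  # placeholder number of node classes

def fake_posteriors(n, concentration):
    """Placeholder posteriors from a Dirichlet distribution: member nodes tend
    to receive more confident (peaked) posteriors, non-members flatter ones."""
    return rng.dirichlet(alpha=[concentration] * num_classes, size=n)

# Posteriors obtained by querying a *shadow* model on nodes whose
# membership status is known to the attacker.
shadow_member = fake_posteriors(500, 0.3)     # peaked -> members
shadow_nonmember = fake_posteriors(500, 1.5)  # flatter -> non-members
X_shadow = np.vstack([shadow_member, shadow_nonmember])
y_shadow = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = member

# Attack model: a small binary classifier over posterior vectors.
attack_model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                             random_state=0)
attack_model.fit(X_shadow, y_shadow)

# At attack time, query the *target* model (black-box access) and feed its
# posteriors to the attack model to predict membership of each queried node.
target_member = fake_posteriors(200, 0.3)
target_nonmember = fake_posteriors(200, 1.5)
X_target = np.vstack([target_member, target_nonmember])
y_target = np.concatenate([np.ones(200), np.zeros(200)])

pred = attack_model.predict(X_target)
print("attack accuracy on synthetic posteriors:", accuracy_score(y_target, pred))
```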