Graph Neural Networks (GNNs) are a popular technique for modelling graph-structured data: they compute node-level representations by aggregating information from each node's neighborhood.
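To make the aggregation step concrete, here is a minimal NumPy sketch of one round of mean aggregation over a toy graph (the function name and example data are illustrative, not from the paper); note how node 0's features flow into the representations of both of its neighbors:

```python
# Minimal sketch of GNN neighborhood aggregation (mean aggregation).
# Real GNN layers (e.g. GCN, GraphSAGE) add learned weights and
# nonlinearities on top of this step.
import numpy as np

def aggregate_neighbors(features, adjacency):
    """Return each node's representation as the mean of its neighbors' features.

    features:  (num_nodes, dim) array of node features.
    adjacency: (num_nodes, num_nodes) binary adjacency matrix.
    """
    degrees = adjacency.sum(axis=1, keepdims=True)
    degrees = np.maximum(degrees, 1)  # avoid division by zero for isolated nodes
    return adjacency @ features / degrees

# Toy graph: node 0 is connected to nodes 1 and 2, so node 0's (possibly
# sensitive) features enter the representations of both neighbors.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 2.0]])
print(aggregate_neighbors(feats, adj))
```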
However, this aggregation carries an increased risk of revealing sensitive information, since a single node can participate in the inference for many other nodes. As a result, standard privacy-preserving machine learning techniques such as differentially private stochastic gradient descent (DP-SGD), which are designed for settings where each data point participates in the inference for only one point, either do not apply or lead to inaccurate models.
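For contrast, the following sketch shows a single textbook DP-SGD step on a toy linear-regression loss (the function and hyperparameter names are illustrative; this is standard DP-SGD, not the method of this work). Its guarantee hinges on each training example contributing exactly one per-example gradient, an assumption that breaks for GNNs, where one node's features enter the loss terms of every neighbor that aggregates over it:

```python
# Minimal sketch of one DP-SGD step: clip each per-example gradient to a
# fixed L2 norm, then add Gaussian noise scaled to that clipping bound.
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    # Per-example gradients of the squared loss 0.5 * (x.w - y)^2.
    residuals = X @ w - y                  # shape (batch,)
    grads = residuals[:, None] * X         # shape (batch, dim), one row per example
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(norms / clip_norm, 1.0)
    # Sum, add noise calibrated to the clipping bound, and average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_grad = (grads.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_grad

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = X @ np.ones(4) + 0.1 * rng.normal(size=32)
w = np.zeros(4)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
print(w)
```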
In this work, we formally define the problem of learning GNN parameters with node-level privacy and provide an algorithmic solution with a strong differential privacy guarantee. We employ a careful sensitivity analysis and provide a non-trivial extension of the privacy amplification by subsampling technique to the GNN setting.
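The abstract does not spell out the algorithm, so the following is only a sketch of the general idea behind node-level sensitivity control, under assumptions of my own (it is not the authors' method). If each node may appear in at most K other nodes' aggregation lists, then its data influences at most K + 1 per-node loss terms (its neighbors' plus its own), so clipping each per-node gradient to norm C bounds the node-level sensitivity of the summed gradient by (K + 1) * C; an amplification argument would additionally account for sampling training nodes at random in each step:

```python
# Illustrative toy: cap how many other nodes may aggregate over each node,
# so one node's data influences a bounded number of per-node gradients.
# This is an assumption-laden sketch, not the paper's algorithm.
import random

def cap_occurrences(neighbors, K, seed=0):
    """Drop edges so each node appears in at most K other nodes' neighbor lists."""
    rng = random.Random(seed)
    usage = {u: 0 for u in neighbors}   # how many lists each node appears in
    order = list(neighbors)
    rng.shuffle(order)                  # randomize which edges survive the cap
    capped = {}
    for v in order:
        kept = []
        for u in neighbors[v]:
            if usage[u] < K:            # node u still has occurrence budget
                kept.append(u)
                usage[u] += 1
            # otherwise drop the edge (v, u)
        capped[v] = kept
    return capped

neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [0]}
print(cap_occurrences(neighbors, K=2))  # node 0 appears in at most 2 lists
```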
An empirical evaluation on standard benchmark datasets demonstrates that our method learns accurate privacy-preserving GNNs that outperform both private and non-private methods that completely ignore graph information.