We demonstrate that it is possible to train large recurrent language models
with user-level differential privacy guarantees at only a negligible cost in
predictive accuracy. Our work builds on recent advances in the training of deep
networks on user-partitioned data and privacy accounting for stochastic
gradient descent. In particular, we add user-level privacy protection to the
federated averaging algorithm, which makes "large step" updates from user-level
data. Our work demonstrates that given a dataset with a sufficiently large
number of users (a requirement easily met by even small internet-scale
datasets), achieving differential privacy comes at the cost of increased
computation, rather than decreased utility as in most prior work. We find
that our private LSTM language models are quantitatively and qualitatively
similar to un-noised models when trained on a large dataset.
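To make the mechanism concrete, the following is a minimal sketch of one round of user-level differentially private federated averaging, assuming a fixed per-user clipping norm and Gaussian noise; the function and parameter names (clip_update, dp_fedavg_round, clip_norm, noise_multiplier) are illustrative, not taken from the paper's implementation.

```python
# Minimal sketch of one DP-FedAvg round: clip each user's model delta,
# average the clipped deltas, and add Gaussian noise calibrated to the
# sensitivity of that average. Names here are hypothetical.
import numpy as np


def clip_update(delta, clip_norm):
    """Scale a user's model delta so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(delta)
    return delta * min(1.0, clip_norm / norm) if norm > 0 else delta


def dp_fedavg_round(global_model, user_deltas, clip_norm,
                    noise_multiplier, rng):
    """Aggregate per-user updates with clipping and Gaussian noise.

    Clipping bounds each user's influence on the averaged update;
    the noise standard deviation scales with that bounded sensitivity
    (clip_norm / n for an equal-weight average over n sampled users).
    """
    clipped = [clip_update(d, clip_norm) for d in user_deltas]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(user_deltas)
    noise = rng.normal(0.0, sigma, size=avg.shape)
    return global_model + avg + noise


# Toy usage: 100 sampled users, each contributing a random model delta.
rng = np.random.default_rng(0)
model = np.zeros(10)
deltas = [rng.normal(size=10) for _ in range(100)]
model = dp_fedavg_round(model, deltas, clip_norm=1.0,
                        noise_multiplier=1.0, rng=rng)
```

Because each user's total contribution per round is bounded by the clip, a privacy accountant (such as the moments accountant used for noisy SGD) can track the cumulative (epsilon, delta) guarantee across training rounds at the user level.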