Large-scale machine learning systems often involve data distributed across a
collection of users. Federated learning algorithms leverage this structure by
communicating model updates to a central server, rather than entire datasets.
In this paper, we study stochastic optimization algorithms for a personalized
federated learning setting involving local and global models subject to
user-level (joint) differential privacy. While learning a private global model
incurs a cost of privacy, local learning is perfectly private. We provide
generalization guarantees showing that coordinating local learning with private
centralized learning yields an accuracy-privacy tradeoff that is generically better
than either approach alone. We illustrate our theoretical results with experiments on
synthetic and real-world datasets.
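
As a minimal illustration of the setting (the notation below is ours and is not fixed by the abstract), each user $u$ may hold a locally trained model $w_u$ while the server learns a global model $\bar{w}$ under user-level differential privacy, for instance by adding Gaussian noise to the aggregate of user models; a personalized model then interpolates the two:
\[
\bar{w} \;=\; \frac{1}{n}\sum_{u=1}^{n} w_u \,+\, \mathcal{N}\!\bigl(0,\sigma^2 I\bigr),
\qquad
\theta_u \;=\; \alpha\, w_u \,+\, (1-\alpha)\,\bar{w},
\quad \alpha \in [0,1].
\]
In this sketch, $\alpha = 1$ recovers perfectly private local learning, $\alpha = 0$ recovers the private global model, and intermediate values trade the cost of privacy against the limited data available to each user.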