Many problems in machine learning rely on multi-task learning (MTL), in which
the goal is to solve multiple related tasks simultaneously.
MTL is particularly relevant for privacy-sensitive applications in areas such
as healthcare, finance, and IoT computing, where sensitive data from multiple,
varied sources are shared for the purpose of learning. In this work, we
formalize notions of client-level privacy for MTL via joint differential
privacy (JDP), a relaxation of differential privacy for mechanism design and
distributed optimization. We then propose an algorithm for mean-regularized
MTL, an objective commonly used in personalized federated
learning, subject to JDP. We analyze our objective and solver, providing
certifiable guarantees on both privacy and utility. Empirically, we find that
our method provides improved privacy/utility trade-offs relative to global
baselines across common federated learning benchmarks.
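For reference, a minimal sketch of the mean-regularized MTL objective named above, in a standard form from the personalized federated learning literature (the symbols $F_k$, $w_k$, $\bar{w}$, and $\lambda$ are illustrative notation, not taken from this abstract): each of $K$ clients fits a personal model $w_k$ to its local empirical risk $F_k$, while a regularizer pulls all models toward a shared mean $\bar{w}$,
\[
\min_{w_1, \dots, w_K,\, \bar{w}} \; \sum_{k=1}^{K} \Big( F_k(w_k) + \frac{\lambda}{2} \, \lVert w_k - \bar{w} \rVert_2^2 \Big),
\]
where $\lambda \ge 0$ trades off personalization against consensus: $\lambda \to 0$ recovers purely local training, while $\lambda \to \infty$ forces all clients onto a single global model.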
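To make the shape of such a solver concrete, below is a minimal, hypothetical Python sketch of one common construction for this setting: alternating local descent on each client's regularized loss with a clipped, Gaussian-perturbed update of the shared mean (the usual route to a joint-DP-style guarantee, since only the noisy mean depends on other clients' data). This is not the paper's certified algorithm; all names, parameters, and the noise scale are illustrative assumptions, and calibrating the noise to a target $(\varepsilon, \delta)$ is omitted.

import numpy as np

def private_mean_regularized_mtl(
    client_grads,   # hypothetical: list of functions grad_k(w_k) -> np.ndarray
    dim,
    num_rounds=50,
    local_steps=5,
    lr=0.1,
    lam=1.0,        # regularization strength (lambda above)
    clip=1.0,       # L2 clipping bound on each client's model
    noise_std=0.5,  # assumed Gaussian noise scale; calibration to (eps, delta) omitted
    rng=None,
):
    """Illustrative alternating solver: local gradient steps on
    F_k(w_k) + (lam/2)||w_k - w_bar||^2, then a clipped, noisy mean
    update for the shared model w_bar."""
    rng = rng or np.random.default_rng(0)
    K = len(client_grads)
    w = [np.zeros(dim) for _ in range(K)]  # personalized client models
    w_bar = np.zeros(dim)                  # shared mean model
    for _ in range(num_rounds):
        # Local phase: each client descends its mean-regularized loss.
        for k, grad_k in enumerate(client_grads):
            for _ in range(local_steps):
                g = grad_k(w[k]) + lam * (w[k] - w_bar)
                w[k] = w[k] - lr * g
        # Server phase: clip each model in L2 norm, average, add Gaussian noise.
        clipped = [wk * min(1.0, clip / (np.linalg.norm(wk) + 1e-12)) for wk in w]
        w_bar = np.mean(clipped, axis=0) + rng.normal(0.0, noise_std, size=dim)
    return w, w_bar

# Toy usage with quadratic client losses F_k(w) = 0.5 * ||w - c_k||^2:
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
grads = [lambda w, c=c: (w - c) for c in centers]
models, shared = private_mean_regularized_mtl(grads, dim=2)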