Federated Learning (FL) is a machine learning paradigm where local nodes
collaboratively train a central model while the training data remains
decentralized. Existing FL methods typically share model parameters or employ
co-distillation to address the issue of unbalanced data distribution. However,
these approaches suffer from communication bottlenecks and, more importantly,
risk privacy leakage. In this work, we develop a privacy-preserving and
communication-efficient method in an FL framework with one-shot offline
knowledge distillation using unlabeled, cross-domain public data. We propose a
quantized and noisy
ensemble of local predictions from fully trained local models for stronger
privacy guarantees without sacrificing accuracy. Based on extensive experiments
on image and text classification tasks, we show that our privacy-preserving
method outperforms baseline FL algorithms in both accuracy and communication
efficiency.
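To make the aggregation step concrete, the following is a minimal sketch of a
quantized, noisy ensemble of local predictions, assuming uniform quantization
of per-sample softmax outputs and Laplace noise on the averaged ensemble; the
quantization granularity, noise distribution, and function names are
illustrative assumptions rather than the exact mechanism of this work.

```python
import numpy as np

def quantize(probs, levels=16):
    """Uniformly quantize softmax outputs to a fixed number of levels,
    reducing both communication cost and fine-grained information leakage."""
    return np.round(probs * (levels - 1)) / (levels - 1)

def noisy_ensemble(local_probs, levels=16, noise_scale=0.05, rng=None):
    """Aggregate quantized local predictions on public data and perturb the
    ensemble with Laplace noise before using it as a distillation target.

    local_probs: array of shape (num_clients, num_samples, num_classes),
    i.e., each client's softmax predictions on the shared public data.
    """
    rng = np.random.default_rng() if rng is None else rng
    quantized = quantize(np.asarray(local_probs), levels)   # per-client quantization
    ensemble = quantized.mean(axis=0)                        # one-shot aggregation
    noisy = ensemble + rng.laplace(0.0, noise_scale, ensemble.shape)  # privacy noise
    noisy = np.clip(noisy, 0.0, None)
    return noisy / noisy.sum(axis=-1, keepdims=True)         # renormalize to a distribution
```

Under this sketch, the server would distill the central model against the
resulting noisy soft labels over the unlabeled public data in a single offline
round, so a single transfer of predictions replaces repeated parameter
exchange.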