Abstract
Pre-training on public data is an effective method to improve the performance
of federated learning (FL) with differential privacy (DP). This paper
investigates how large language models (LLMs) trained on public data can
improve the quality of pre-training data for the on-device language models
trained with DP and FL. We carefully design LLM prompts to filter and transform
existing public data, and generate new data to resemble the real user data
distribution. The model pre-trained on our synthetic dataset achieves relative
improvements of 19.0% and 22.8% in next word prediction accuracy compared to the
baseline model pre-trained on a standard public dataset, when evaluated over
the real user data in Gboard (Google Keyboard, a production mobile keyboard
application). Furthermore, our method achieves evaluation accuracy better than
or comparable to the baseline during DP FL fine-tuning over millions of
mobile devices, and our final model outperforms the baseline in production A/B
testing. Our experiments demonstrate the strengths of LLMs in synthesizing data
close to the private distribution even without accessing the private data, and
also suggest future research directions to further reduce the distribution gap.
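To make the prompt-based filtering and transformation concrete, the sketch below shows one way such a pipeline could look. It is a minimal, hypothetical illustration: `llm_generate`, the prompt wording, and the YES/NO filtering convention are assumptions made here for exposition, not the prompts or APIs used in the paper.

```python
# Hypothetical sketch: use an LLM to (1) filter public sentences that already
# resemble mobile-keyboard text and (2) rewrite the kept ones into short,
# informal, chat-like messages. `llm_generate` is a placeholder for whatever
# LLM inference call is available.

FILTER_PROMPT = (
    "You will be shown a sentence from a public web corpus.\n"
    "Answer YES if it reads like something a person would type on a mobile\n"
    "keyboard (short, conversational, informal), and NO otherwise.\n\n"
    "Sentence: {sentence}\nAnswer:"
)

TRANSFORM_PROMPT = (
    "Rewrite the following sentence as a short, informal message a person\n"
    "might type on a phone keyboard. Keep the original meaning.\n\n"
    "Sentence: {sentence}\nRewrite:"
)


def llm_generate(prompt: str) -> str:
    """Placeholder for an LLM call; replace with a concrete model or API."""
    raise NotImplementedError


def build_synthetic_dataset(public_sentences):
    """Filter public sentences and transform the kept ones into chat-like text."""
    synthetic = []
    for sentence in public_sentences:
        verdict = llm_generate(FILTER_PROMPT.format(sentence=sentence)).strip()
        if verdict.upper().startswith("YES"):
            rewrite = llm_generate(TRANSFORM_PROMPT.format(sentence=sentence))
            synthetic.append(rewrite.strip())
    return synthetic
```

The sketch covers only the filter-and-transform step over existing public data; the paper's approach additionally uses LLM prompts to generate new data outright, so a full pipeline would combine both sources before pre-training.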