In this work, we propose a novel framework for privacy-preserving,
client-distributed machine learning. It is motivated by the goal of achieving
differential privacy guarantees in the local model of privacy in a way that
satisfies practical systems constraints, uses asynchronous client-server
communication, and provides attractive model-learning properties. We call it
"Draw and Discard" because it relies on random sampling of models for load
distribution (scalability), which also provides additional server-side privacy
protections and improved model quality through averaging. We present the
mechanics of client and server components of "Draw and Discard" and demonstrate
how the framework can be applied to learning Generalized Linear Models (GLMs). We then
analyze the privacy guarantees our approach provides against several types of
adversaries, and present experimental results that support the framework's
viability in practical deployments.
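The "draw and discard" mechanic described above can be sketched as follows. This is an illustrative assumption-based sketch, not the paper's calibrated algorithm: the class and function names, the use of Gaussian noise, and all parameter values are placeholders.

```python
import random

class DrawAndDiscardServer:
    """Hypothetical sketch: the server keeps k parallel model instances;
    each client draws a uniformly random instance, updates it locally with
    noise, and the server discards a uniformly random instance in favor of
    the returned one."""

    def __init__(self, k, dim):
        # k parallel instances of a linear model, initialized at zero here.
        self.instances = [[0.0] * dim for _ in range(k)]

    def draw(self):
        # "Draw": serve a uniformly random instance to the next client.
        return list(random.choice(self.instances))

    def store(self, updated):
        # "Discard": overwrite a uniformly random instance with the update.
        self.instances[random.randrange(len(self.instances))] = updated

    def average(self):
        # Averaging the k instances yields the final, higher-quality model.
        k = len(self.instances)
        return [sum(m[d] for m in self.instances) / k
                for d in range(len(self.instances[0]))]

def noisy_client_update(model, gradient, lr=0.1, noise_scale=0.01):
    # Local gradient step plus additive noise, standing in for a local-DP
    # mechanism; Gaussian noise and these values are not the paper's
    # calibration.
    return [w - lr * g + random.gauss(0.0, noise_scale)
            for w, g in zip(model, gradient)]
```

In this sketch, a client would call `draw()`, compute a gradient on its private data, apply `noisy_client_update`, and return the result via `store()`; because draws and stores need not alternate in lockstep, the exchange fits the asynchronous client-server communication the framework assumes.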