In machine learning, the emergence of \textit{the right to be forgotten} has given
rise to a paradigm called \textit{machine unlearning}, which enables data
holders to proactively erase their data from a trained model. Existing machine
unlearning techniques focus on centralized training, where the server must have
access to all holders' training data to conduct the unlearning process. How to
achieve unlearning when full access to all training data is unavailable remains
largely underexplored. One noteworthy example is
Federated Learning (FL), where each participating data holder trains locally,
without sharing its training data with the central server. In this paper, we
investigate the problem of machine unlearning in FL systems. We start with a
formal definition of the unlearning problem in FL and propose a rapid
retraining approach to fully erase data samples from a trained FL model. The
resulting design allows data holders to jointly conduct the unlearning process
efficiently while keeping their training data local. Our formal convergence
and complexity analyses demonstrate that our design preserves model utility
with high efficiency. Extensive evaluations on four real-world datasets
illustrate the effectiveness and efficiency of our proposed realization.