Federated learning is a recent advance in privacy protection. In this
context, a trusted curator aggregates parameters optimized in a decentralized
fashion by multiple clients. The resulting model is then distributed back to
all clients, ultimately converging to a joint representative model without
explicitly having to share the data. However, the protocol is vulnerable to
differential attacks, which could originate from any party contributing during
federated optimization. In such an attack, a client's contribution during
training and information about their data set are revealed through analysis of
the distributed model. We tackle this problem and propose an algorithm for
client-side, differentially private federated optimization. The aim is to
hide clients' contributions during training, balancing the trade-off between
privacy loss and model performance. Empirical studies suggest that given a
sufficiently large number of participating clients, our proposed procedure can
maintain client-level differential privacy at only a minor cost in model
performance.
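The mechanism described above, hiding individual client contributions by clipping each update and adding calibrated Gaussian noise before aggregation, can be sketched as a single federated averaging round. This is an illustrative sketch, not the paper's exact procedure; the function names, parameters, and noise calibration here are assumptions.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Scale a client's update so its L2 norm is at most clip_norm,
    # bounding each client's influence (the sensitivity) on the average.
    norm = np.linalg.norm(update)
    if norm == 0:
        return update
    return update * min(1.0, clip_norm / norm)

def dp_federated_average(client_updates, clip_norm, noise_multiplier, rng):
    # One server-side aggregation step: average the clipped updates,
    # then add Gaussian noise scaled to the per-client sensitivity
    # clip_norm / m (a simple instance of the Gaussian mechanism).
    m = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / m,
                       size=mean.shape)
    return mean + noise

rng = np.random.default_rng(0)
# Hypothetical client updates: 100 clients, 10-dimensional parameter vectors.
updates = [rng.normal(size=10) for _ in range(100)]
noisy_avg = dp_federated_average(updates, clip_norm=1.0,
                                 noise_multiplier=1.0, rng=rng)
```

Note how the noise standard deviation shrinks with the number of clients `m`: with many participants, each client's clipped contribution to the average is small, so the noise needed to mask it is small too, which is why a large client population keeps the cost in model performance minor.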