Federated Learning enables distributed clients holding sensitive datasets to
jointly train a machine learning model. In real-world settings,
this approach is hindered by expensive communication and privacy concerns. Both
of these challenges have already been addressed individually, resulting in
competing optimisations. In this article, we are among the first to tackle
both simultaneously. More precisely, we adapt compression-based federated
techniques to additive secret sharing, leading to an efficient secure
aggregation protocol, with an adaptable security level. We prove its privacy
against malicious adversaries and its correctness in the semi-honest setting.
Experiments on deep convolutional networks demonstrate that our secure protocol
achieves high accuracy with low communication costs. Compared to prior work on
secure aggregation, our protocol incurs lower communication and computation
costs at similar accuracy.
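The abstract refers to additive secret sharing as the basis for secure aggregation. As a minimal illustrative sketch (not the paper's actual protocol, and ignoring compression and quantisation details), each client could split its update into random shares that sum to the update modulo a fixed modulus; no individual share reveals the update, yet summing per-position shares across clients recovers the aggregate:

```python
import secrets

Q = 2**32  # modulus for the additive sharing; an illustrative choice


def share(secret: int, n: int) -> list[int]:
    """Split an integer into n additive shares modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Recombine additive shares modulo Q."""
    return sum(shares) % Q


# Three clients each share a (quantised) scalar update.
updates = [7, 12, 5]
n = 3
all_shares = [share(u, n) for u in updates]

# Each aggregator sums the shares it holds; individually these sums
# reveal nothing about any single client's update...
server_sums = [sum(s[i] for s in all_shares) % Q for i in range(n)]

# ...yet recombining them yields exactly the aggregate update.
assert reconstruct(server_sums) == sum(updates) % Q
```

In practice each model update is a vector, so the sharing is applied coordinate-wise after quantisation.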