Recent attacks on federated learning demonstrate that keeping the training
data on clients' devices does not provide sufficient privacy, as the model
parameters shared by clients can leak information about their training data. A
'secure aggregation' protocol enables the server to aggregate clients' models
in a privacy-preserving manner. However, existing secure aggregation protocols
incur high computation/communication costs, especially when the number of model
parameters is larger than the number of clients participating in an iteration
-- a typical scenario in federated learning.
In this paper, we propose a secure aggregation protocol, FastSecAgg, that is
efficient in terms of computation and communication, and robust to client
dropouts. The main building block of FastSecAgg is a novel multi-secret sharing
scheme, FastShare, based on the Fast Fourier Transform (FFT), which may be of
independent interest. FastShare is information-theoretically secure, and
achieves a trade-off between the number of secrets, privacy threshold, and
dropout tolerance. Leveraging these capabilities of FastShare, we prove that
FastSecAgg is (i) secure against the server colluding with 'any' subset of some
constant fraction (e.g. $\sim10\%$) of the clients in the honest-but-curious
setting; and (ii) tolerates dropouts of a 'random' subset of some constant
fraction (e.g. $\sim10\%$) of the clients. FastSecAgg incurs significantly
lower computation cost than existing schemes while matching their
(orderwise) communication cost. In addition, it guarantees security against
adaptive adversaries, which can perform client corruptions dynamically during
the execution of the protocol.
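The core idea behind an FFT-based multi-secret sharing scheme -- packing several secrets and random masks into the coefficients of a polynomial and evaluating it at roots of unity via a number-theoretic transform -- can be illustrated with a toy sketch. All parameters here (the field GF(257), N=16 shares, K=4 packed secrets, and the placement of secrets in the low-order coefficients) are illustrative assumptions, not FastShare's actual construction, which places secrets and randomness at carefully chosen FFT points to obtain its privacy/dropout trade-off; this sketch also reconstructs from all N shares rather than tolerating dropouts.

```python
import random

# Toy parameters (illustrative, not the paper's): GF(257) contains 16th
# roots of unity because 16 divides 257 - 1 = 256.
P = 257          # field modulus
N = 16           # number of shares (one per client)
K = 4            # number of secrets packed into one sharing
T = N - K        # remaining coefficients filled with random masks

# 3 is a primitive root mod 257, so OMEGA has multiplicative order N.
OMEGA = pow(3, (P - 1) // N, P)

def ntt(coeffs, omega):
    """Evaluate the polynomial with the given coefficients at omega^0..omega^(N-1)."""
    return [sum(c * pow(omega, i * j, P) for j, c in enumerate(coeffs)) % P
            for i in range(len(coeffs))]

def share(secrets):
    """Pack K secrets with T random masks into coefficients; FFT-evaluate to get N shares."""
    assert len(secrets) == K
    coeffs = list(secrets) + [random.randrange(P) for _ in range(T)]
    return ntt(coeffs, OMEGA)

def reconstruct(shares):
    """Recover the K packed secrets from all N shares via the inverse transform."""
    n_inv = pow(N, P - 2, P)              # modular inverse of N in GF(P)
    omega_inv = pow(OMEGA, P - 2, P)      # inverse root of unity
    coeffs = [c * n_inv % P for c in ntt(shares, omega_inv)]
    return coeffs[:K]

# Round trip: sharing then reconstructing recovers the secrets.
secrets = [7, 42, 101, 200]
assert reconstruct(share(secrets)) == secrets

# Linearity of the transform gives the additive homomorphism that secure
# aggregation relies on: summing shares element-wise yields a sharing of
# the element-wise sum of the secrets.
a, b = [1, 2, 3, 4], [10, 20, 30, 40]
agg = [(x + y) % P for x, y in zip(share(a), share(b))]
assert reconstruct(agg) == [11, 22, 33, 44]
```

The additive homomorphism shown in the last assertion is what lets the server aggregate clients' masked model updates without learning any individual update.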