Federated Learning (FL) allows parties to learn a shared prediction model by
delegating the training computation to clients and aggregating the separately
trained models on the server. To prevent private information from being
inferred from local models, Secure Aggregation (SA) protocols ensure that the
server cannot inspect individual trained models as it aggregates them.
However, current implementations of SA in FL frameworks have limitations,
including vulnerability to client dropouts and difficult configuration.
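To illustrate the core idea behind SA protocols such as SecAgg, the following is a minimal sketch (not Salvia's actual code, and omitting key agreement, secret sharing, and dropout recovery) of pairwise masking: each pair of clients agrees on a shared random mask, which one client adds and the other subtracts, so the masks cancel in the server's sum while individual updates remain hidden.

```python
# Hypothetical sketch of pairwise masking in secure aggregation.
# In real SecAgg, pairwise masks are derived via Diffie-Hellman key
# agreement and secret-shared to tolerate client dropouts.
import random

def masked_updates(updates, seed=0):
    """Return per-client masked vectors whose sum equals the true sum."""
    rng = random.Random(seed)
    n = len(updates)
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # Shared pairwise mask between clients i and j.
            mask = [rng.uniform(-1.0, 1.0) for _ in updates[0]]
            for k, m in enumerate(mask):
                masked[i][k] += m  # client i adds the mask
                masked[j][k] -= m  # client j subtracts it
    return masked

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
masked = masked_updates(updates)
# The server only sees masked vectors; summing them cancels the masks.
aggregate = [sum(col) for col in zip(*masked)]
```

Each individual masked vector looks random to the server, yet the aggregate matches the sum of the raw updates (up to floating-point rounding).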
In this paper, we present Salvia, an implementation of SA for Python users in
the Flower FL framework. Based on the SecAgg(+) protocols under a semi-honest
threat model, Salvia is robust against client dropouts and exposes a flexible,
easy-to-use API that is compatible with various machine learning frameworks.
We show that Salvia's experimental performance is consistent with SecAgg(+)'s
theoretical computation and communication complexities.