Abstract
Federated Learning (FL) is a decentralized machine learning approach where
client devices train models locally and send them to a server that performs
aggregation to generate a global model. FL is vulnerable to model inversion
attacks, where the server can infer sensitive client data from trained models.
Google's Secure Aggregation (SecAgg) protocol addresses this data privacy issue
by masking each client's trained model using shared secrets and individual
elements generated locally on the client's device. Although SecAgg effectively
preserves privacy, it imposes considerable communication and computation
overhead, especially as network size increases. Building upon SecAgg, this
poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol
that substantially reduces this overhead by using only two shared secrets per
client to mask the model. We design the method for stable networks with low
delay variation and limited client dropout. CESA is independent of the data
distribution and network size (for networks with more than six nodes), and it
prevents the honest-but-curious server from accessing unmasked models. Our initial
evaluation reveals that CESA significantly reduces the communication cost
compared to SecAgg.
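
To make the masking idea concrete, below is a minimal Python sketch of pairwise additive masking, the mechanism underlying SecAgg-style protocols: each pair of clients derives a mask from a shared secret, one adds it and the other subtracts it, so the masks cancel in the server's sum. The modulus, the ring topology, and the names `pairwise_mask` and `mask_model` are our assumptions for illustration; the abstract does not specify CESA's construction beyond its use of two shared secrets per client.

```python
import secrets

import numpy as np

# Hypothetical sketch of SecAgg-style pairwise additive masking.
# Each pair of clients agrees on a seed (in SecAgg, via key agreement);
# the lower-id client adds the derived mask, the higher-id client
# subtracts it, so every mask cancels in the server-side sum.

MOD = 2**32  # assumed modulus: model updates quantized into a finite group


def pairwise_mask(seed: int, dim: int) -> np.ndarray:
    """Expand a shared seed into a deterministic mask vector."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, MOD, size=dim, dtype=np.uint64)


def mask_model(model: np.ndarray, client_id: int, shared_seeds: dict) -> np.ndarray:
    """Mask a quantized model update with one mask per shared secret."""
    masked = model.astype(np.uint64) % MOD
    for neighbor_id, seed in shared_seeds.items():
        mask = pairwise_mask(seed, model.size)
        if client_id < neighbor_id:
            masked = (masked + mask) % MOD
        else:
            # uint64 wraparound is harmless here, since 2**32 divides 2**64
            masked = (masked - mask) % MOD
    return masked


# Toy demo: 8 clients on a ring, each sharing seeds with only its two
# neighbors -- our reading of "two shared secrets per client" (SecAgg
# would instead pair every client with all n - 1 others).
n, dim = 8, 4
models = [np.arange(dim, dtype=np.uint64) + i for i in range(n)]
seeds = {frozenset((i, (i + 1) % n)): secrets.randbits(32) for i in range(n)}

masked_sum = np.zeros(dim, dtype=np.uint64)
for i in range(n):
    neighbors = ((i - 1) % n, (i + 1) % n)
    shared = {j: seeds[frozenset((i, j))] for j in neighbors}
    masked_sum = (masked_sum + mask_model(models[i], i, shared)) % MOD

assert np.array_equal(masked_sum, sum(models) % MOD)  # masks cancel in the aggregate
```

Masking each model with two pairwise masks instead of n - 1 is what would drive the communication saving the abstract reports; the demo only verifies that the masks still cancel, not CESA's dropout handling or its security analysis.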