Abstract
Federated Learning (FL) enables multiple clients, such as mobile phones and
IoT devices, to collaboratively train a global machine learning model while
keeping their data localized. However, recent studies have revealed that the
training phase of FL is vulnerable to reconstruction attacks, such as attribute
inference attacks (AIA), where adversaries exploit exchanged messages and
auxiliary public information to uncover sensitive attributes of targeted
clients. While these attacks have been extensively studied in the context of
classification tasks, their impact on regression tasks remains largely
unexplored. In this paper, we address this gap by proposing novel model-based
AIAs specifically designed for regression tasks in FL environments. Our
approach considers scenarios where adversaries can either eavesdrop on
exchanged messages or directly interfere with the training process. We
benchmark our proposed attacks against state-of-the-art methods using
real-world datasets. The results demonstrate a significant increase in
reconstruction accuracy, particularly when client datasets are heterogeneous,
a common scenario in FL. The efficacy of our model-based AIAs makes them
strong candidates for empirically quantifying privacy leakage in federated
regression tasks.