Abstract
Federated Learning (FL) is a collaborative machine learning technique where
multiple clients work together with a central server to train a global model
without sharing their private data. However, the distribution shift across
clients' non-IID datasets poses a challenge to this one-model-fits-all
approach, hindering the global model's ability to adapt effectively to each
client's unique local data. To address this challenge, personalized FL (PFL)
allows each client to train a personalized local model tailored to its private
data. While extensive research has scrutinized backdoor risks in FL, such
risks remain underexplored in PFL. In this study, we delve into
the vulnerabilities of PFL to backdoor attacks. Our analysis tells a tale of
two cities. On the one hand, the personalization process in PFL can dilute the
backdoor poisoning effects injected into the personalized local models, and
PFL systems can further deploy both server-side and client-side defense
mechanisms to strengthen the barrier against backdoor attacks. On the other
hand, our study shows that PFL fortified with these defenses may offer a
false sense of security. We propose \textit{PFedBA}, a stealthy and effective
backdoor attack strategy applicable to PFL systems. \textit{PFedBA} ingeniously
aligns the backdoor learning task with the main learning task of PFL by
optimizing the trigger generation process. Our comprehensive experiments
demonstrate the effectiveness of \textit{PFedBA} in seamlessly embedding
triggers into personalized local models. \textit{PFedBA} yields outstanding
attack performance across 10 state-of-the-art PFL algorithms, defeating six
existing defense mechanisms. Our study sheds light on the subtle yet potent
backdoor threats to PFL systems, urging the community to bolster defenses
against emerging backdoor challenges.
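To make the gradient-alignment idea concrete, below is a minimal, hypothetical sketch of how a patch trigger could be optimized so that the parameter gradient of the backdoor objective aligns with that of the main learning task. This is not the paper's released implementation: the function name optimize_trigger, its signature, and the cosine-similarity alignment objective are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def optimize_trigger(model, loader, trigger, mask, target_label,
                     steps=50, lr=0.01):
    """Hypothetical sketch: optimize a patch trigger so the gradient of the
    backdoor loss aligns with the gradient of the main-task loss.
    `trigger` and `mask` are assumed to broadcast against input batches."""
    trigger = trigger.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([trigger], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    data_iter = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            x, y = next(data_iter)
        # Main-task gradient on clean inputs, treated as a fixed target.
        g_main = torch.autograd.grad(F.cross_entropy(model(x), y), params)
        g_main = torch.cat([g.flatten() for g in g_main]).detach()
        # Backdoor gradient on triggered inputs; keep the graph so the
        # alignment loss can be differentiated w.r.t. the trigger pixels.
        x_bd = (1 - mask) * x + mask * trigger
        y_bd = torch.full_like(y, target_label)
        g_bd = torch.autograd.grad(F.cross_entropy(model(x_bd), y_bd),
                                   params, create_graph=True)
        g_bd = torch.cat([g.flatten() for g in g_bd])
        # Maximize cosine similarity between the two tasks' gradients.
        loss = -F.cosine_similarity(g_bd, g_main, dim=0)
        opt.zero_grad()
        loss.backward(inputs=[trigger])  # only update the trigger
        opt.step()
        with torch.no_grad():
            trigger.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return trigger.detach()
```

Intuitively, under this sketch the poisoned updates point in roughly the same direction as benign main-task updates, which suggests why personalization and gradient-based defenses may struggle to separate them.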