Abstract
Federated learning (FL) is a decentralized machine learning technique that
allows multiple entities to jointly train a model while preserving dataset
privacy. However, its distributed nature has raised various security concerns,
which have been addressed by increasingly sophisticated defenses. These
defenses draw on a range of data sources and metrics to, for example, filter
out malicious model updates, thereby minimizing or eliminating the impact of
attacks.
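To make the filtering idea concrete, the sketch below shows a hypothetical FedAvg-style aggregation step with a simple norm-based filter. The function name, threshold, and filtering rule are illustrative assumptions, not the paper's method; real defenses use richer signals such as cosine similarity, clustering, or clipping with noise.

```python
import numpy as np

def aggregate(global_model, updates, norm_threshold=1.0):
    """Hypothetical FedAvg-style aggregation with a norm-based filter.

    Illustrative only: actual FL defenses combine many metrics;
    this sketch drops any update whose L2 norm exceeds a threshold.
    """
    # Keep only updates whose L2 norm is at most the threshold,
    # discarding suspiciously large (possibly malicious) ones.
    kept = [u for u in updates if np.linalg.norm(u) <= norm_threshold]
    if not kept:
        return global_model  # no trustworthy updates this round
    # Average the surviving updates and apply them to the global model.
    return global_model + np.mean(kept, axis=0)
```

An attack such as MIGO aims to craft updates that pass this kind of check by staying statistically close to benign updates.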
This paper explores the feasibility of designing a generic attack method
capable of installing backdoors in FL while evading a diverse array of
defenses. Specifically, we focus on an attack strategy called MIGO, which
aims to produce model updates that subtly blend with legitimate ones. The
resulting effect is a gradual integration of a backdoor into the global model,
often ensuring its persistence long after the attack concludes, while
generating enough ambiguity to hinder the effectiveness of defenses.
MIGO was employed to implant three types of backdoors across five datasets
and different model architectures. The results demonstrate the significant
threat posed by these backdoors, as MIGO consistently achieved exceptionally
high backdoor accuracy (exceeding 90%) while maintaining the utility of the
main task. Moreover, MIGO exhibited strong evasion capabilities against ten
defenses, including several state-of-the-art methods. When compared to four
other attack strategies, MIGO consistently outperformed them across most
configurations. Notably, even in extreme scenarios where the attacker controls
just 0.1% of the clients, the results indicate that successful backdoor
insertion is possible if the attacker can persist for a sufficient number of
rounds.