Abstract
Federated learning (FL) enables a large number of clients, possibly mobile
devices, to collaborate on training a generalized machine learning model. By
exploiting a larger pool of local samples without ever sharing the raw data,
FL offers a degree of privacy to the collaborating clients. However, with so
many participants, it is often difficult to profile and verify each client,
which creates a security threat: malicious participants can degrade the
accuracy of the trained model by submitting poisoned models during training.
Hence, the aggregation framework at the parameter server must also minimize
the detrimental effects of such malicious clients. A plethora of attack and
defence strategies has been analyzed in the literature; however, the
Byzantine problem is often treated purely as an outlier-detection task,
oblivious to the topology of the neural network (NN) being trained.
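
The abstract does not fix a particular aggregation rule at the parameter server. As a minimal illustration of why robust aggregation matters, the Python sketch below contrasts plain federated averaging with coordinate-wise median, one widely studied Byzantine-robust aggregator; the client counts and update magnitudes are hypothetical.

```python
import numpy as np

def federated_average(updates):
    """Plain FedAvg: element-wise mean of client updates (no robustness)."""
    return np.mean(updates, axis=0)

def coordinate_wise_median(updates):
    """A common Byzantine-robust aggregator: the per-coordinate median
    bounds the influence any single malicious client can exert."""
    return np.median(updates, axis=0)

# Hypothetical round: 8 honest clients plus 2 Byzantine clients.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 5))   # benign gradient updates
byzantine = np.full((2, 5), 100.0)           # crude, easily detected poisoning
updates = np.vstack([honest, byzantine])

print(federated_average(updates))       # badly skewed by the attackers
print(coordinate_wise_median(updates))  # stays close to the honest mean
```

A crude large-magnitude attack like the one above is exactly what outlier-detection-style defences catch, which motivates the more topology-aware attack introduced next.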
In the scope of this work, we argue that by extracting certain side
information specific to the NN topology, one can design stronger attacks.
Hence, inspired by sparse neural networks, we introduce a hybrid sparse
Byzantine attack composed of two parts: one exhibits a sparse nature and
attacks only certain NN locations with high sensitivity, while the other is
more silent but accumulates over time. Each part ideally targets a different
type of defence mechanism, and together they form a strong yet imperceptible
attack. Finally, we show through extensive simulations that the proposed
hybrid Byzantine attack is effective against eight different defence
methods.
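
To make the two-part structure concrete, the following is a minimal Python sketch of how such a hybrid attack could be assembled. It is an illustration under our own assumptions, not the paper's actual construction: the HybridSparseAttack class, the gradient-magnitude sensitivity proxy, and the parameters k, spike, and eps are all hypothetical.

```python
import numpy as np

class HybridSparseAttack:
    """Illustrative sketch only: the abstract does not specify the exact
    construction. Combines (1) a sparse spike on the most sensitive
    coordinates with (2) a small fixed-direction drift that is re-added
    every round, so its effect accumulates in the global model."""

    def __init__(self, dim, k=0.01, spike=5.0, eps=0.01, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.k = max(1, int(k * dim))  # number of sensitive coordinates hit
        self.spike = spike             # magnitude of the sparse component
        self.eps = eps                 # per-round magnitude of the silent drift
        self.drift = self.rng.normal(0.0, 1.0, size=dim)
        self.drift /= np.linalg.norm(self.drift)

    def poison(self, honest_update, sensitivity):
        """Return a poisoned update: flip the k most sensitive coordinates
        hard, then add the small accumulating drift everywhere else."""
        poisoned = honest_update.copy()
        idx = np.argsort(sensitivity)[-self.k:]  # most sensitive coordinates
        poisoned[idx] -= self.spike * np.sign(honest_update[idx])
        poisoned += self.eps * self.drift        # silent, accumulating part
        return poisoned

# Hypothetical usage with gradient magnitude as the sensitivity proxy.
rng = np.random.default_rng(1)
dim = 1000
honest = rng.normal(0.0, 0.1, size=dim)
sensitivity = np.abs(honest)
attack = HybridSparseAttack(dim, rng=rng)
poisoned = attack.poison(honest, sensitivity)
print(np.linalg.norm(poisoned - honest))  # small overall deviation per round
```

Under these assumptions, the sparse spike targets defences that only screen aggregate statistics of an update, while the low-magnitude drift stays below per-round outlier thresholds and does its damage cumulatively.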