In federated learning, machine learning and deep learning models are trained
collaboratively across distributed devices. The state-of-the-art
privacy-preserving technique for federated learning is user-level differential
privacy. However, this mechanism is vulnerable to certain model poisoning
attacks, notably Sybil attacks, in which a malicious adversary creates multiple
fake clients or colludes with compromised devices to directly manipulate model
updates. Recent defenses against model poisoning attacks struggle to detect
Sybil attacks when differential privacy is in use, because the added
perturbation masks clients' model updates.
In this work, we implement the first Sybil attacks on
differential-privacy-based federated learning architectures and show their
impact on model convergence.
We randomly compromise a subset of clients and manipulate the noise level,
determined by the local privacy budget epsilon of differential privacy, that is
added to the local model updates of these Sybil clients, such that the global
model's convergence rate decreases or the model even diverges.
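As a minimal sketch of this noise manipulation (our illustration: the function
name is hypothetical, and we assume per-update clipping with the standard
Gaussian mechanism, whose noise scale grows as epsilon shrinks):

```python
import numpy as np

def dp_perturb_update(update, clip_norm, epsilon, delta=1e-5):
    """Clip a local update, then add Gaussian noise calibrated to the
    (epsilon, delta) budget via the standard Gaussian mechanism."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / epsilon
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

local_update = 0.01 * np.random.randn(1000)  # stand-in local gradient
honest = dp_perturb_update(local_update, clip_norm=1.0, epsilon=1.0)
# A Sybil client silently shrinks epsilon, inflating its injected noise:
sybil = dp_perturb_update(local_update, clip_norm=1.0, epsilon=0.05)
```

Since the noise scale is inversely proportional to epsilon, a Sybil client
quietly using epsilon = 0.05 submits an update roughly twenty times noisier
than an honest client at epsilon = 1.0.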
We apply our attacks to two recent aggregation defense mechanisms, Krum and
Trimmed Mean.
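For concreteness, both aggregation rules admit short NumPy sketches
(simplified; f denotes the assumed number of Byzantine clients and beta the
number of values trimmed from each end of every coordinate):

```python
import numpy as np

def krum(updates, f):
    """Krum (Blanchard et al., 2017): return the update with the smallest
    sum of squared distances to its n - f - 2 nearest neighbors."""
    n = len(updates)
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    scores = [np.sort(dists[i])[1:n - f - 1].sum() for i in range(n)]  # skip self
    return updates[int(np.argmin(scores))]

def trimmed_mean(updates, beta):
    """Coordinate-wise trimmed mean (Yin et al., 2018): drop the beta largest
    and beta smallest values per coordinate, then average the rest."""
    stacked = np.sort(np.stack(updates), axis=0)
    return stacked[beta:len(updates) - beta].mean(axis=0)

updates = [np.random.randn(10) for _ in range(8)]  # toy client updates
global_krum = krum(updates, f=2)
global_tm = trimmed_mean(updates, beta=2)
```

Intuitively, because every client's update already carries DP noise, heavily
perturbed Sybil updates are harder for such distance- and order-based rules to
single out.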
Our evaluation results on the MNIST and CIFAR-10 datasets show that our attacks
effectively slow down the convergence of the global models.
We then propose a defense that monitors the average loss of all participants in
each round to detect convergence anomalies, defending against our Sybil attacks
based on the prediction cost reported by each client.
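A minimal sketch of such a detector (illustrative: we assume a robust z-score
over the reported losses via the median absolute deviation; the exact criterion
may differ):

```python
import numpy as np

def detect_sybil_clients(reported_losses, z_threshold=2.5):
    """Flag clients whose reported prediction cost deviates strongly
    from the round's median loss (robust z-score based on MAD)."""
    losses = np.asarray(reported_losses, dtype=float)
    median = np.median(losses)
    mad = np.median(np.abs(losses - median)) + 1e-12  # avoid division by zero
    robust_z = 0.6745 * (losses - median) / mad
    return np.where(np.abs(robust_z) > z_threshold)[0]

round_losses = [0.42, 0.39, 0.41, 2.70, 0.40]  # hypothetical per-client losses
suspects = detect_sybil_clients(round_losses)  # -> array([3])
```

Clients flagged in a round can then be excluded from, or down-weighted in, the
aggregation step.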
Our empirical study demonstrates that our defense approach effectively
mitigates the impact of the Sybil attacks on model convergence.