Abstract
Federated learning is a decentralized method for training a machine
learning model without direct access to client data. The primary goal of a
federated learning architecture is to protect the privacy of each client while
still allowing every client to contribute to the global model. However, this
privacy, federated learning's main advantage, is also its most exploitable
aspect: because the server cannot inspect the clients' data, it cannot
assess the data's quality. Through data poisoning methods, such as
backdoor or label-flipping attacks, or by sending manipulated information about
their data back to the server, malicious clients are able to corrupt the global
model and degrade performance across all clients within a federation. Our novel
aggregation method, FedBayes, mitigates the effect of a malicious client by
calculating the probability of a client's model weights given the prior
model's weights using Bayesian statistics. Our results show that this approach
negates the effects of malicious clients and protects the overall federation.
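The core idea can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes a Gaussian likelihood centered on the previous global model's weights, with the function name, the noise scale `sigma`, and the softmax normalization all chosen for illustration:

```python
import numpy as np

def bayesian_aggregate(client_weights, prior_weights, sigma=1.0):
    """Aggregate client weight vectors, down-weighting clients whose
    submitted weights are improbable under a Gaussian prior centered
    on the previous global model (illustrative sketch, not FedBayes
    as published).
    """
    # Log-likelihood of each client's weights under N(prior, sigma^2 I);
    # clients far from the prior model receive a much lower score.
    scores = np.array([
        -np.sum((w - prior_weights) ** 2) / (2 * sigma ** 2)
        for w in client_weights
    ])
    # Softmax over log-likelihoods yields normalized aggregation weights.
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    # The new global model is the probability-weighted average, so an
    # outlier (e.g. poisoned) update contributes almost nothing.
    return sum(p * w for p, w in zip(probs, client_weights))
```

In this toy setting, an honest update close to the prior dominates the average, while a malicious update far from the prior is effectively ignored, which is the mitigation behavior the abstract describes.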