Nuria Rodríguez-Barroso;Eugenio Martínez-Cámara;M. Victoria Luzón;Francisco Herrera
Published
2020-7-30
Updated
2022-2-25
Affiliation
Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada
Federated learning, as a distributed learning paradigm that conducts training on
local devices without accessing the training data, is vulnerable to Byzantine
poisoning adversarial attacks. We argue that the federated learning model has to
withstand such adversarial attacks by filtering out the adversarial clients
through the federated aggregation operator. We propose a dynamic federated
aggregation operator that dynamically discards adversarial clients and thus
prevents the corruption of the global learning model. We assess it as a defense
against adversarial attacks by deploying a deep learning classification model in
a federated learning setting on the Fed-EMNIST Digits, Fashion MNIST and
CIFAR-10 image datasets. The results show that dynamically selecting the clients
to aggregate enhances the performance of the global learning model and discards
both the adversarial clients and the poor ones (those with low-quality models).
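The general idea described in the abstract can be illustrated with a minimal sketch: a server-side aggregation step that averages only the client updates whose models pass a quality check, instead of averaging all of them as in plain federated averaging. This is a simplified illustration, not the paper's exact operator; the function name, the use of a validation score per client, and the fixed threshold are all assumptions made here for the example.

```python
import numpy as np

def dynamic_federated_average(client_weights, client_scores, threshold=0.5):
    """Aggregate client model weights, dynamically discarding clients
    whose score falls below a threshold (hypothetical illustration of a
    dynamic aggregation operator, not the paper's exact method).

    client_weights: list of 1-D numpy arrays (flattened model parameters)
    client_scores:  list of floats, e.g. the accuracy of each client's
                    model on a small held-out validation set at the server
    """
    selected = [w for w, s in zip(client_weights, client_scores)
                if s >= threshold]
    if not selected:  # fall back to plain averaging if everyone is filtered out
        selected = client_weights
    return np.mean(selected, axis=0)

# Three honest clients near the true weights, one poisoned client whose
# model also scores poorly on validation data and is therefore discarded.
clients = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
           np.array([0.9, 1.1]), np.array([100.0, -100.0])]
scores = [0.90, 0.85, 0.88, 0.10]

print(dynamic_federated_average(clients, scores))  # → [1. 1.]
```

With plain averaging, the single poisoned update would shift the global model far from the honest clients' consensus; filtering on a per-round quality signal is what makes the selection "dynamic": a client excluded in one round can be included again later if its model improves.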