Motivated by ever-increasing concerns about personal data privacy and the
rapidly growing volume of data at local clients, federated learning (FL) has
emerged as a new machine learning setting. An FL system comprises a central
parameter server and multiple local clients. It keeps data at the local
clients and learns a centralized model by sharing only the locally learned
model parameters. Since no local data needs to be shared, privacy can be well
protected.
Nevertheless, because it is the model rather than the raw data that is shared,
the system is exposed to model poisoning attacks launched by malicious
clients. Identifying such malicious clients is challenging because no local
client data is available at the server. Moreover, membership inference attacks
can still be performed on the uploaded model to infer a client's local data,
leading to privacy disclosure. In this work, we first
propose a model-update-based federated averaging algorithm to defend against
Byzantine attacks such as additive-noise attacks and sign-flipping attacks. We
then present an individual client model initialization method that provides
further protection against membership inference attacks by hiding the
individual local model. Combining the two schemes enhances both privacy and
security. The proposed schemes are shown experimentally to converge under
non-IID data distributions when there are no attacks, and under Byzantine
attacks they perform substantially better than the classical model-based
FedAvg algorithm.
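To make the setting concrete, the following is a minimal sketch, not the paper's implementation: it illustrates update-based averaging (clients report model deltas rather than full models) together with the two Byzantine attacks named above. All function names, the toy gradient step, and the plain-mean aggregation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_update(global_w, local_grad, lr=0.1):
    # Honest client: take one local gradient step, then report the model
    # UPDATE (delta), not the full model, as in update-based averaging.
    local_w = global_w - lr * local_grad
    return local_w - global_w

def sign_flip(delta):
    # Byzantine sign-flipping attack: reverse the direction of the update.
    return -delta

def additive_noise(delta, sigma=1.0):
    # Byzantine additive-noise attack: corrupt the update with Gaussian noise.
    return delta + rng.normal(0.0, sigma, size=delta.shape)

def server_aggregate(global_w, deltas):
    # Toy update-based aggregation: average the received deltas and apply
    # them to the global model (a robust rule would filter outliers first).
    return global_w + np.mean(deltas, axis=0)
```

Because the server receives deltas, an anomalous update (e.g. a flipped or heavily noised one) can be screened relative to the honest majority before aggregation, which is the intuition behind the update-based defense.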