Federated learning is a recently proposed paradigm that enables multiple clients to collaboratively train a joint model. Each client trains a model locally, and a parameter server generates a global model at each round by aggregating the locally submitted gradient updates.
Although the incentive model for federated learning has not been fully developed, it is generally assumed that participants receive rewards, or the privilege of using the final global model, as compensation for the effort of training it.
Therefore, a client with no local data has an incentive to fabricate gradient updates in order to fraudulently claim these rewards. In this paper, we are the first to propose the notion of free-rider attacks, exploring possible ways in which an attacker may construct gradient updates without any local training data.
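To make the threat model concrete, the sketch below shows one hypothetical way a free-rider could fabricate an update without data: replaying the difference between the two most recent global models it received, perturbed with small Gaussian noise so that successive submissions are not identical. The function name and noise scale are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def fabricate_update(prev_global, curr_global, noise_std=1e-3):
    """Hypothetical free-rider update crafted without any local data.

    prev_global, curr_global: flattened global model parameters
        (np.ndarray) received from the server in the last two rounds.
    noise_std: scale of added Gaussian noise, an illustrative choice
        that keeps repeated submissions from being identical.
    """
    delta = curr_global - prev_global        # direction the global model moved
    noise = np.random.normal(0.0, noise_std, size=delta.shape)
    return delta + noise                     # submitted as a "local" update
```

Because such an update tracks the global training trajectory, it can look statistically similar to honest updates, which is what makes detection nontrivial.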
Furthermore, we explore possible defenses that could detect the proposed attacks, and propose a new high-dimensional detection method called STD-DAGMM, which is particularly effective for anomaly detection over model parameters.
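To illustrate the intuition behind a DAGMM-style detector, the sketch below assembles the per-client feature vector that the estimation network could consume: the compression network's latent code, two standard reconstruction-error features, and, as the distinguishing "STD" ingredient, the standard deviation of the flattened update. The helper names, architecture, and feature dimensions are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def std_dagmm_features(update, encode, decode):
    """Augmented input features for one client's flattened update.

    update: the client's gradient update, flattened to 1-D (np.ndarray).
    encode/decode: a separately trained DAGMM-style compression network
        (signatures here are illustrative assumptions).
    """
    z_c = encode(update)                     # low-dimensional latent code
    recon = decode(z_c)                      # reconstructed update
    eps = 1e-12                              # numerical safety for norms
    rel_dist = np.linalg.norm(update - recon) / (np.linalg.norm(update) + eps)
    cos_sim = update @ recon / (np.linalg.norm(update) * np.linalg.norm(recon) + eps)
    std_feat = update.std()                  # extra feature: fabricated updates
                                             # tend to have atypical spread
    return np.concatenate([z_c, [rel_dist, cos_sim, std_feat]])
```

A Gaussian mixture fitted over these features, as in DAGMM, would then assign low likelihood to clients whose updates fall outside the honest population.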
We further extend the attacks and defenses to settings with multiple free-riders and with differential privacy, shedding light on this problem and calling for future research in this field.