To study the resilience of distributed learning, the "Byzantine" literature
considers a strong threat model where workers can report arbitrary gradients to
the parameter server. While this model has helped establish several fundamental
results, it has sometimes been considered unrealistic when the workers are
mostly trustworthy machines. In this paper, we show a surprising equivalence
between this model and data poisoning, a threat considered much more realistic.
More specifically, we prove that every gradient attack can be reduced to data
poisoning in any personalized federated learning system with PAC guarantees
(which we show are both desirable and realistic). This equivalence makes it
possible to obtain new impossibility results on the resilience of any "robust"
learning algorithm to data poisoning in highly heterogeneous applications, as
corollaries of existing impossibility theorems on Byzantine machine learning.
Moreover, using our equivalence, we derive a practical attack that, as we show
both theoretically and empirically, can be highly effective against classical
personalized federated learning models.