In domains where data are sensitive or private, there is great value in
methods that can learn in a distributed manner without the data ever leaving
the local devices. In light of this need, federated learning has emerged as a
popular training paradigm. However, many federated learning approaches trade
transmitting data for communicating updated weight parameters from each local
device. Therefore, a successful breach that would have otherwise directly
compromised the data instead grants white-box access to the local model, which
opens the door to a number of attacks, including exposing the very data
federated learning seeks to protect. Additionally, in distributed scenarios,
individual client devices commonly exhibit high statistical heterogeneity. Many
common federated approaches learn a single global model; while this may do well
on average, performance degrades when the i.i.d. assumption is violated,
underfitting individuals farther from the mean and raising questions of
fairness. To address these issues, we propose Weight Anonymized Factorization
for Federated Learning (WAFFLe), an approach that combines the Indian Buffet
Process with a shared dictionary of weight factors for neural networks.
Experiments on MNIST, FashionMNIST, and CIFAR-10 demonstrate WAFFLe's
significant improvement to local test performance and fairness while
simultaneously providing an extra layer of security.