Abstract
With the development of laws and regulations related to privacy preservation,
it has become difficult to collect personal data to perform machine learning.
In this context, federated learning, which is distributed learning without
sharing personal data, has been proposed. In this paper, we focus on federated
learning for user authentication. We show that it is difficult to achieve both
privacy preservation and high accuracy with existing methods. To address these
challenges, we propose IPFed, a privacy-preserving federated learning method
that uses random projection of class embeddings. Furthermore, we prove that
IPFed achieves learning equivalent to that of the state-of-the-art method. Experiments
on face image datasets show that IPFed can protect the privacy of personal data
while maintaining the accuracy of the state-of-the-art method.
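The abstract names the core mechanism (random projection applied to class embeddings) without giving details. A minimal NumPy sketch of that idea might look like the following; the dimensions, the Gaussian projection, and the function name `project` are illustrative assumptions, not IPFed's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 128-d face embeddings, 64-d projected space.
d_embed, d_proj = 128, 64

# Random Gaussian projection matrix; in this sketch it stays client-side,
# so the server only ever sees projected class embeddings.
P = rng.normal(0.0, 1.0 / np.sqrt(d_proj), size=(d_embed, d_proj))

def project(class_embedding: np.ndarray) -> np.ndarray:
    """Project a class embedding before sharing, hiding the raw vector."""
    return class_embedding @ P

w = rng.normal(size=d_embed)   # a client's class embedding
w_shared = project(w)          # what would be sent to the server
print(w_shared.shape)          # (64,)
```

The server cannot invert the projection without knowing `P`, which is the intuition behind using random projection for privacy here.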
Privacy-preserving machine learning: Threats and solutions
M. Al-Rubaie, J. M. Chang
Published: 2019
arXiv
Cited by 1
Communication-Efficient Learning of Deep Networks from Decentralized Data
H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas
Published: February 18, 2016
Modern mobile devices have access to a wealth of data suitable for learning
models, which in turn can greatly improve the user experience on the device.
For example, language models can improve speech recognition and text entry, and
image models can automatically select good photos. However, this rich data is
often privacy sensitive, large in quantity, or both, which may preclude logging
to the data center and training there using conventional approaches. We
advocate an alternative that leaves the training data distributed on the mobile
devices, and learns a shared model by aggregating locally-computed updates. We
term this decentralized approach Federated Learning.
We present a practical method for the federated learning of deep networks
based on iterative model averaging, and conduct an extensive empirical
evaluation, considering five different model architectures and four datasets.
These experiments demonstrate the approach is robust to the unbalanced and
non-IID data distributions that are a defining characteristic of this setting.
Communication costs are the principal constraint, and we show a reduction in
required communication rounds by 10-100x as compared to synchronized stochastic
gradient descent.
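The iterative model averaging described above can be sketched in a few lines. The toy linear-regression objective, the two-client non-IID split, and all hyperparameters below are illustrative assumptions, not the paper's exact FedAvg algorithm or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few epochs of local gradient descent on one client's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One round: each client trains locally from the global model,
    then the server averages the updates weighted by local data size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Two clients with non-IID slices of a common linear target w* = [3, -2].
w_true = np.array([3.0, -2.0])
clients = []
for shift in (0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # converges toward [ 3. -2.]
```

Running several local epochs per round before averaging, rather than averaging after every gradient step, is what reduces the number of communication rounds relative to synchronized SGD.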