Abstract
Training a machine learning model over an encrypted dataset is a promising
approach to privacy-preserving machine learning. However, efficiently training
a deep neural network (DNN) model over encrypted data is extremely challenging
for two reasons: first, it requires large-scale computation over huge datasets;
second, existing solutions for computation over encrypted data, such as
homomorphic encryption, are inefficient. Further, achieving strong DNN model
performance requires large training datasets composed of data from multiple
sources that may not have pre-established trust relationships with one another.
We propose a novel framework, NN-EMD, to train a DNN over multiple encrypted
datasets collected from multiple sources.
multiple sources. Toward this, we propose a set of secure computation protocols
using hybrid functional encryption schemes. We evaluate our framework's
performance in terms of training time and model accuracy on the MNIST dataset.
Compared to existing frameworks, our proposed NN-EMD framework
can significantly reduce the training time, while providing comparable model
accuracy and privacy guarantees as well as supporting multiple data sources.
Furthermore, the depth and complexity of the neural network do not affect the
training time, even in the privacy-preserving NN-EMD setting.
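To give an intuition for the kind of secure computation the abstract refers to, the following is a conceptual toy sketch (not the actual hybrid functional encryption construction used in NN-EMD) of inner-product functional encryption: a data source encrypts its input vector, and an evaluator holding a "functional key" for a weight vector learns only the inner product, which is exactly the first-layer computation of a DNN. All function names here are illustrative, and the one-time-pad masking stands in for real public-key cryptography.

```python
import random

# Toy illustration of the inner-product functional encryption idea.
# NOT the NN-EMD scheme: real schemes use public-key cryptography so
# that no trusted party needs to share the pad with the evaluator.

def encrypt(x, r):
    # Data source masks each private input with a one-time random pad.
    return [xi + ri for xi, ri in zip(x, r)]

def keygen(w, r):
    # "Functional key" for weight vector w: the pad's contribution to <w, x>.
    return sum(wi * ri for wi, ri in zip(w, r))

def evaluate(w, ct, sk_w):
    # Evaluator sees only the ciphertext and the functional key, yet
    # recovers exactly <w, x> -- and learns nothing else about x.
    return sum(wi * ci for wi, ci in zip(w, ct)) - sk_w

x = [1.0, 2.0, 3.0]                          # private training sample
w = [0.5, -1.0, 2.0]                         # model weights
r = [random.uniform(-10, 10) for _ in x]     # one-time pad

ct = encrypt(x, r)
sk_w = keygen(w, r)
result = evaluate(w, ct, sk_w)
# result equals the plaintext inner product 0.5*1 - 1.0*2 + 2.0*3 = 4.5
```

The design point this toy captures is why functional encryption can be faster than fully homomorphic encryption for DNN training: the evaluator decrypts the needed linear function directly in the clear, so all subsequent layers run at plaintext speed, which is consistent with the claim that network depth does not affect training time.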