Abstract
Two parties wish to collaborate on their datasets. Before they reveal their
datasets to each other, however, the parties want a guarantee that the
collaboration will be fruitful. We look at this problem from the point of view
of machine learning, where one party is promised an improvement to its
prediction model by incorporating data from the other party. The parties wish
to collaborate further only if the updated model shows an improvement in
accuracy, and until this is ascertained, neither party wants to disclose its
model or dataset. In this work, we construct an
interactive protocol for this problem based on the fully homomorphic encryption
scheme over the Torus (TFHE) and label differential privacy, where the
underlying machine learning model is a neural network. Label differential
privacy allows part of the computation to be carried out in the clear rather
than entirely in the encrypted domain, since training a neural network fully
under FHE is a significant bottleneck in current state-of-the-art
implementations. We formally
prove the security of our scheme assuming honest-but-curious parties,
including the case where one party may lack the expertise to label its
initial dataset.
Experiments show that we can obtain the output, i.e., the accuracy of the
updated model, many orders of magnitude faster than with a protocol that
performs all operations under FHE.
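
To make the label differential privacy ingredient concrete, the sketch below
is our own illustration, not the paper's mechanism: the choice of K-ary
randomized response and the function name are assumptions. It shows how a
party could privatize its labels before releasing them in the clear, which is
what lets part of the training pipeline escape the encrypted domain.

```python
import numpy as np

def randomized_response_labels(labels, num_classes, epsilon, rng=None):
    """K-ary randomized response: keep the true label with probability
    e^eps / (e^eps + K - 1), otherwise replace it with a uniformly random
    *other* class. This satisfies eps-label differential privacy.
    """
    rng = np.random.default_rng() if rng is None else rng
    labels = np.asarray(labels)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    keep = rng.random(labels.shape) < p_keep
    # Sample an offset in [1, K-1] and add it modulo K, so a flipped
    # label can never coincide with the true one.
    offsets = rng.integers(1, num_classes, size=labels.shape)
    return np.where(keep, labels, (labels + offsets) % num_classes)

# Example: privatize binary labels with privacy budget epsilon = 2.
noisy = randomized_response_labels([0, 1, 1, 0, 1], num_classes=2,
                                   epsilon=2.0)
```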
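
The abstract likewise does not name a particular TFHE implementation. As one
possible instantiation (an assumption on our part), the minimal sketch below
uses Zama's concrete-python library to evaluate a toy function
homomorphically, illustrating the kind of encrypted-domain computation the
interactive protocol would rely on for the parts that must stay encrypted.

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def score(x):
    # Toy stand-in for the encrypted portion of the protocol; a real
    # instantiation would evaluate (part of) the model update here.
    return x + 1

# Compile a TFHE circuit from a representative set of clear inputs.
inputset = range(16)
circuit = score.compile(inputset)

# Encrypt the input, run homomorphically, and decrypt in one call.
assert circuit.encrypt_run_decrypt(3) == 4
```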