Abstract
Replacing non-polynomial functions (e.g., non-linear activation functions such as ReLU) in a neural network with their polynomial approximations is a standard practice in privacy-preserving machine learning. The resulting neural network, referred to as the polynomial approximation of a neural network (PANN) in this paper, is compatible with advanced cryptosystems and enables privacy-preserving model inference. Using "highly precise" approximations, state-of-the-art PANNs offer inference accuracy similar to that of the underlying backbone model. However, little is known about the effect of the approximation, and the existing literature typically determines the required approximation precision empirically. In this paper, we initiate the investigation of PANN as a standalone object.
Specifically, our contributions are twofold. First, we explain the effect of approximation error in PANN. In particular, we discovered that (1) PANN is susceptible to certain types of perturbations, and (2) weight regularisation significantly reduces PANN's accuracy. We support our explanation with experiments. Second, based on the insights from our investigation, we propose solutions to increase the inference accuracy of PANN.
Experiments show that the combination of our solutions is very effective: at the same precision, our PANN is 10% to 50% more accurate than the state of the art; and at the same accuracy, our PANN requires a precision of only 2^{-9}, whereas the state-of-the-art solution requires a precision of 2^{-12}, using the ResNet-20 model on the CIFAR-10 dataset.
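
To make the setting concrete, the following is a minimal sketch of the basic idea behind PANN: fitting a low-degree polynomial to ReLU on a bounded interval and measuring its worst-case approximation error. It is an illustrative example only; the degree, input interval, and least-squares Chebyshev fit are assumptions for this sketch, not the construction or precision targets used in the paper.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def fit_relu_polynomial(degree=15, interval=(-8.0, 8.0), num_points=4096):
        # Least-squares Chebyshev fit to ReLU over a bounded input interval.
        # Degree and interval are illustrative assumptions.
        xs = np.linspace(interval[0], interval[1], num_points)
        return np.polynomial.Chebyshev.fit(xs, relu(xs), deg=degree)

    poly_relu = fit_relu_polynomial()

    # "Precision" here refers to the worst-case approximation error over the
    # interval; raising the polynomial degree shrinks it, at the cost of more
    # multiplications when evaluated under the cryptosystem.
    grid = np.linspace(-8.0, 8.0, 10000)
    max_err = np.max(np.abs(poly_relu(grid) - relu(grid)))
    print(f"max |poly(x) - ReLU(x)| on [-8, 8]: {max_err:.4f}")

In a PANN, each non-polynomial activation in the backbone network would be replaced by such a polynomial, so the whole forward pass consists only of additions and multiplications that the cryptosystem can evaluate.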