A Hybrid Privacy-Preserving Neural Network (HPPNN), which implements linear
layers with Homomorphic Encryption (HE) and nonlinear layers with Garbled
Circuits (GC), is one of the most promising secure solutions for emerging
Machine Learning as a Service (MLaaS). Unfortunately, an HPPNN suffers from long inference latency,
e.g., $\sim100$ seconds per image, which makes MLaaS unsatisfactory. Because
the HE-based linear layers of an HPPNN account for $93\%$ of the inference
latency, it is critical to select a set of HE parameters that minimizes the
computational overhead of the linear layers. Prior HPPNNs over-pessimistically
select huge HE parameters to maintain large noise budgets, because they use the
same set of HE parameters for the entire network and ignore the network's
error-tolerance capability.
In this paper, for fast and accurate secure neural network inference, we
propose AutoPrivacy, an automated layer-wise parameter selector that leverages
deep reinforcement learning to determine a set of HE parameters for each
linear layer of an HPPNN. The learned HE parameter selection policy
outperforms conventional rule-based HE parameter selection policies.
Compared to prior HPPNNs, AutoPrivacy-optimized HPPNNs reduce inference latency
by $53\%\sim70\%$ with negligible loss of accuracy.
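To make the layer-wise selection idea concrete, here is a minimal, purely illustrative sketch. It is not the paper's method (AutoPrivacy uses deep reinforcement learning over real HE cost and noise models); instead it uses a simple epsilon-greedy bandit per layer, and all names and numbers (`DEGREES`, `LATENCY`, `NOISE_BUDGET`, the per-layer noise demands) are made-up assumptions used only to show what "choosing HE parameters per layer" means.

```python
import random

# Toy setting, all values assumed for illustration only.
DEGREES = [2048, 4096, 8192]                    # candidate HE polynomial degrees
LATENCY = {2048: 1.0, 4096: 2.3, 8192: 5.1}     # assumed relative latency cost
NOISE_BUDGET = {2048: 20, 4096: 60, 8192: 150}  # assumed noise budget per degree

def reward(degree, layer_demand):
    # Heavily penalize a noise budget too small for the layer (decryption
    # would fail); otherwise reward lower latency.
    if NOISE_BUDGET[degree] < layer_demand:
        return -10.0
    return -LATENCY[degree]

def select_parameters(layer_demands, episodes=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: learn one HE degree per linear layer."""
    rng = random.Random(seed)
    # q[i][d] is the running mean reward of degree d for layer i.
    q = [{d: 0.0 for d in DEGREES} for _ in layer_demands]
    n = [{d: 0 for d in DEGREES} for _ in layer_demands]
    for _ in range(episodes):
        for i, demand in enumerate(layer_demands):
            d = rng.choice(DEGREES) if rng.random() < eps \
                else max(DEGREES, key=lambda k: q[i][k])
            n[i][d] += 1
            q[i][d] += (reward(d, demand) - q[i][d]) / n[i][d]
    # Greedy final choice: smallest-latency degree whose budget suffices.
    return [max(DEGREES, key=lambda k: q[i][k]) for i in range(len(layer_demands))]

if __name__ == "__main__":
    # Hypothetical per-layer noise demands for a 3-layer network.
    print(select_parameters([15, 55, 140]))
```

The point of the sketch is the shape of the problem: each layer has its own noise demand, so a single global parameter set must be sized for the worst layer, while per-layer selection lets shallow-demand layers use cheaper parameters.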