Abstract
In this paper, we present VerifyML, the first secure inference framework to
check the degree of fairness of a given machine learning (ML) model. VerifyML is
generic and is immune to any obstruction by the malicious model holder during
the verification process. We rely on secure two-party computation (2PC)
technology to implement VerifyML, and carefully customize a series of
optimization methods to boost its performance for both linear and nonlinear
layer execution. Specifically, (1) VerifyML shifts the vast majority of the
overhead to the offline phase, thus meeting the low-latency requirements of
online inference. (2) To speed up offline preparation, we first design novel
homomorphic parallel computing techniques to accelerate the generation of
authenticated Beaver's triples (including matrix-vector and convolution
triples). This achieves up to a $1.7\times$ computation speedup and at least
$10.7\times$ lower communication overhead compared to state-of-the-art work. (3)
We also present a new cryptographic protocol to evaluate the activation
functions of non-linear layers, which is $4\times$--$42\times$ faster and requires
$>48\times$ less communication than existing 2PC protocols against malicious
adversaries. In fact, VerifyML even outperforms the state-of-the-art semi-honest ML
secure inference system! We provide a formal theoretical analysis of VerifyML's
security and demonstrate its performance superiority on mainstream ML models
including ResNet-18 and LeNet.
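
To give a sense of the building block the offline phase accelerates, the following is a minimal sketch of how an (unauthenticated) Beaver triple enables two parties to multiply additively secret-shared values; the prime modulus, the trusted-dealer setup, and the helper names are illustrative assumptions and not the paper's actual authenticated-triple protocol.

```python
# Illustrative sketch of Beaver-triple multiplication over Z_p (not the
# paper's protocol; the paper generates authenticated triples homomorphically).
import secrets

p = 2**61 - 1  # illustrative prime modulus

def share(v):
    """Split v into two additive shares modulo p."""
    r = secrets.randbelow(p)
    return r, (v - r) % p

def reconstruct(s0, s1):
    return (s0 + s1) % p

# Trusted-dealer style triple generation: random a, b and c = a*b.
a, b = secrets.randbelow(p), secrets.randbelow(p)
a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % p)

# Online phase: multiply shared inputs x and y using the triple.
x, y = 12345, 67890
x_sh, y_sh = share(x), share(y)

# Each party masks its shares locally; the masked values d, e are opened.
d = reconstruct((x_sh[0] - a_sh[0]) % p, (x_sh[1] - a_sh[1]) % p)  # d = x - a
e = reconstruct((y_sh[0] - b_sh[0]) % p, (y_sh[1] - b_sh[1]) % p)  # e = y - b

# [xy] = [c] + d*[b] + e*[a] + d*e (the public d*e term is added by one party).
z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % p
z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % p
assert reconstruct(z0, z1) == (x * y) % p
```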