Abstract
Secure Multiparty Computation (MPC) protocols enable secure evaluation of a
circuit by several parties, even in the presence of an adversary who
maliciously corrupts all but one of the parties. Many such MPC protocols
follow the well-known secret-sharing-based paradigm of SPDZ and SPDZ2k,
which ensures security against a malicious adversary by computing Message
Authentication Code (MAC) tags on the input shares and then evaluating the
circuit over these shares and tags. However, this tag
computation adds a significant runtime overhead, particularly for machine
learning (ML) applications with numerous linear computation layers such as
convolutions and fully connected layers.
To alleviate the tag computation overhead, we introduce CompactTag, a
lightweight algorithm for generating MAC tags specifically tailored for linear
layers in ML. Linear layer operations in ML, including convolutions, can be
transformed into Toeplitz matrix multiplications. For the multiplication of two
matrices with dimensions T1 x T2 and T2 x T3 respectively, SPDZ2k requires O(T1
x T2 x T3) local multiplications for the tag computation. In contrast,
CompactTag requires only O(T1 x T2 + T1 x T3 + T2 x T3) local multiplications,
resulting in a substantial performance boost for various ML models.
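The asymptotic gap above can be illustrated with a back-of-the-envelope count. The sketch below is not the protocol itself; it simply evaluates the two cost formulas quoted in the abstract, assuming each MAC tag costs one local scalar multiplication per tagged element:

```python
# Compare local-multiplication counts for MAC tag computation when
# multiplying a T1 x T2 matrix by a T2 x T3 matrix.
# The formulas are the asymptotic costs quoted in the abstract;
# the concrete dimensions are hypothetical.

def spdz2k_tag_mults(t1: int, t2: int, t3: int) -> int:
    # SPDZ2k-style: tag arithmetic mirrors every scalar multiplication
    # inside the matrix product -> O(T1 * T2 * T3).
    return t1 * t2 * t3

def compacttag_mults(t1: int, t2: int, t3: int) -> int:
    # CompactTag: cost proportional to the sizes of the two input
    # matrices and the output matrix -> O(T1*T2 + T1*T3 + T2*T3).
    return t1 * t2 + t1 * t3 + t2 * t3

# Example: a square 512 x 512 linear layer.
t = 512
ratio = spdz2k_tag_mults(t, t, t) / compacttag_mults(t, t, t)
print(f"tag-multiplication ratio: {ratio:.1f}x")  # 512/3, about 170.7x
```

For square matrices of side T the ratio is T/3, so the savings grow with layer size, which is why the benefit is most pronounced for large convolutions and fully connected layers.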
We empirically compared our protocol to the SPDZ2k protocol for various ML
circuits, including ResNet Training-Inference, Transformer Training-Inference,
and VGG16 Training-Inference. SPDZ2k dedicated around 30% of its online runtime
to tag computation. CompactTag speeds up this tag computation bottleneck by up
to 23x, resulting in up to 1.47x total online phase runtime speedups for
various ML workloads.