Abstract
The widespread adoption of machine learning (ML) in various critical
applications, from healthcare to autonomous systems, has raised significant
concerns about privacy, accountability, and trustworthiness. To address these
concerns, recent research has focused on developing zero-knowledge machine
learning (zkML) techniques that enable the verification of various aspects of
ML models without revealing sensitive information. Recent advances in zkML have
substantially improved efficiency; however, these efforts have primarily
optimized the process of proving ML computations correct, often overlooking the
substantial overhead associated with verifying the necessary commitments to the
model and data. To close this gap, this paper introduces two new
Commit-and-Prove SNARK (CP-SNARK) constructions (Apollo and Artemis) that
address the emerging challenge of commitment verification in zkML
pipelines. Apollo operates on KZG commitments and requires white-box use of the
underlying proof system, whereas Artemis is compatible with any homomorphic
polynomial commitment and only makes black-box use of the proof system. As a
result, Artemis is compatible with state-of-the-art proof systems without
trusted setup. We present the first implementation of these CP-SNARKs, evaluate
their performance on a diverse set of ML models, and show substantial
improvements over existing methods, achieving significant reductions in prover
costs and maintaining efficiency even for large-scale models. For example, for
the VGG model, we reduce the overhead associated with commitment checks from
11.5x to 1.2x. Our results suggest that these contributions can move zkML
towards practical deployment, particularly in scenarios involving large and
complex ML models.
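
As background (this sketch is not part of the abstract), the additive homomorphism that a construction like Artemis relies on can be illustrated with KZG-style polynomial commitments. Here $\tau$ is the secret evaluation point from the trusted setup, and $[x]_1$ denotes $x \cdot G_1$ for a group generator $G_1$:

$$
C(p) \;=\; [\,p(\tau)\,]_1 \;=\; \sum_{i=0}^{d} p_i \,[\tau^i]_1
\qquad \text{for } p(X) = \sum_{i=0}^{d} p_i X^i .
$$

Because the commitment is linear in the coefficients, commitments to two polynomials combine homomorphically:

$$
C(p) + C(q) \;=\; [\,p(\tau) + q(\tau)\,]_1 \;=\; C(p + q).
$$

Note that KZG itself requires a trusted setup; the abstract's point is that Artemis needs only this kind of homomorphic structure from the commitment scheme, so it can also be instantiated with homomorphic polynomial commitments that avoid trusted setup.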