Machine Learning as a Service (MLaaS) allows clients with limited resources
to outsource their expensive ML tasks to powerful servers. Despite the huge
benefits, current MLaaS solutions still lack strong assurances on: 1) service
correctness (i.e., whether the MLaaS works as expected); 2) trustworthy
accounting (i.e., whether the resource consumption billed for the MLaaS is
correctly accounted for); 3) fair payment (i.e., whether a client receives the
complete MLaaS result before making the payment). Without these assurances, an
unfaithful service provider can return improperly executed ML task results or
partially trained ML models while claiming inflated rewards. Moreover, it is
hard to convince either clients or service providers to adopt MLaaS widely,
especially in an open market without a trusted third party. In this
paper, we present VeriML, a novel and efficient framework to bring integrity
assurances and fair payments to MLaaS. With VeriML, clients can be assured that
ML tasks are correctly executed on an untrusted server and the resource
consumption claimed by the service provider matches the actual workload. We
strategically apply succinct non-interactive arguments of knowledge (SNARKs)
to randomly selected iterations of the ML training phase, achieving efficiency
with tunable probabilistic assurance. We also develop multiple ML-specific
optimizations to the arithmetic circuits required by SNARKs. Our system
implements six common algorithms: linear regression, logistic regression,
neural networks, support vector machines, k-means clustering, and decision
trees. Experimental results validate the practical performance of VeriML.
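As a brief illustration of the tunable probabilistic assurance, verifying SNARK proofs on uniformly sampled iterations yields a detection probability that follows directly from hypergeometric sampling. The sketch below is ours, not VeriML's implementation; the function name and parameters are hypothetical.

```python
from math import comb

def detection_probability(total_iters, cheated_iters, sampled_iters):
    """Probability that at least one cheated iteration falls among the
    uniformly sampled iterations the client checks via SNARK proofs.

    Equals 1 - C(honest, s) / C(total, s), the complement of the chance
    that all s sampled iterations were executed honestly."""
    honest = total_iters - cheated_iters
    if sampled_iters > honest:
        return 1.0  # every sample set must contain a cheated iteration
    return 1 - comb(honest, sampled_iters) / comb(total_iters, sampled_iters)

# e.g., 1000 training iterations, server cheats on 10, client checks 50
p = detection_probability(1000, 10, 50)
```

Increasing the number of sampled iterations raises the detection probability, which is what makes the assurance level tunable against verification cost.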