GOAT: GPU Outsourcing of Deep Learning Training With Asynchronous Probabilistic Integrity Verification Inside Trusted Execution Environment
Machine learning models based on Deep Neural Networks (DNNs) are increasingly
deployed in applications ranging from self-driving cars to COVID-19 treatment
discovery. To supply the computational power necessary to train a DNN, cloud
environments with dedicated hardware support have emerged as
critical infrastructure. However, there are many integrity challenges
associated with outsourcing computation. Various approaches have been developed
to address these challenges, building on trusted execution environments (TEE).
Yet, no existing approach scales up to support realistic integrity-preserving
DNN model training for heavy workloads (deep architectures and millions of
training examples) without sustaining a significant performance hit. To
narrow the time gap between pure TEE execution (full integrity) and pure GPU
execution (no integrity), we combine random verification of selected
computation steps with systematic adjustments of DNN hyper-parameters (e.g.,
a narrow gradient clipping range), thereby limiting the attacker's ability to
shift the model parameters significantly in any step that is not selected for
verification during the training phase. Experimental results show the new
approach achieves a 2X to 20X performance improvement over a pure TEE-based
solution while guaranteeing a very high probability of integrity (e.g., 0.999)
against state-of-the-art DNN backdoor attacks.
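The combination described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the clipping range, learning rate, verification probability, and all function names (`gpu_step`, `tee_step`, `train_step`) are assumptions for the sake of the example, and the TEE re-execution is shown synchronously even though the paper's verification is asynchronous.

```python
import random
import numpy as np

CLIP = 0.01        # assumed narrow gradient-clipping range
LR = 0.1           # assumed learning rate
VERIFY_PROB = 0.1  # assumed fraction of steps re-checked in the TEE

def gpu_step(params, grads):
    # Untrusted GPU update; clipping bounds how far a single
    # (possibly forged) step can move the parameters.
    clipped = np.clip(grads, -CLIP, CLIP)
    return params - LR * clipped

def tee_step(params, grads):
    # Trusted re-execution of the same update inside the enclave
    # (identical by construction in this sketch).
    clipped = np.clip(grads, -CLIP, CLIP)
    return params - LR * clipped

def train_step(params, grads):
    new_params = gpu_step(params, grads)
    if random.random() < VERIFY_PROB:
        # Shown synchronously for clarity; the paper verifies asynchronously.
        if not np.allclose(new_params, tee_step(params, grads)):
            raise RuntimeError("integrity violation detected")
    return new_params

params = np.zeros(4)
params = train_step(params, np.array([0.5, -0.5, 2.0, -2.0]))
print(params)  # each component moved by at most LR * CLIP = 0.001
```

An attacker who forges a step risks detection with probability `VERIFY_PROB`, so the chance of evading all checks over many steps shrinks geometrically, while clipping caps the damage any single unverified step can do.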