Abstract
Many Internet-of-Things (IoT) devices rely on cloud computation resources to
perform machine learning inferences. This is expensive and may raise privacy
concerns for users. Consumers of these devices often have hardware such as
gaming consoles and PCs with graphics accelerators that are capable of
performing these computations, which may be left idle for significant periods
of time. While this presents a compelling potential alternative to cloud
offloading, concerns about the integrity of inferences, the confidentiality of
model parameters, and the privacy of users' data mean that device vendors may
be hesitant to offload their inferences to a platform managed by another
manufacturer.
We propose VeriSplit, a framework for offloading machine learning inferences
to locally available devices that addresses these concerns. We introduce masking
techniques to protect data privacy and model confidentiality, and a
commitment-based verification protocol to address integrity. Unlike much prior
work aimed at addressing these issues, our approach does not rely on
computation over finite field elements, which may interfere with floating-point
computation support on hardware accelerators and require modification to
existing models. We implemented a prototype of VeriSplit, and our evaluation
results show that, compared to performing the computation locally, our secure and
private offloading solution can reduce inference latency by 28%--83%.
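The abstract only names the masking and commitment techniques; as a rough, hypothetical illustration of the general idea (not necessarily VeriSplit's actual construction), the sketch below shows additive masking of the input to a single linear layer directly over floating point: the helper device sees only a masked input, and the requester unmasks the result with a correction term assumed to be precomputed during an offline or idle phase. Model-confidentiality masking and the commitment-based integrity check are not shown, and all names here are illustrative.

    import numpy as np

    # Hypothetical sketch of input masking for an offloaded linear layer
    # y = W @ x, working over floats rather than finite field elements.
    rng = np.random.default_rng(0)

    W = rng.standard_normal((4, 8)).astype(np.float32)  # layer weights
    x = rng.standard_normal(8).astype(np.float32)        # private user input

    # Offline/idle phase: draw a random mask r and precompute the
    # correction term W @ r before any inference request arrives.
    r = rng.standard_normal(8).astype(np.float32)
    Wr = W @ r

    # Online phase: only the masked input leaves the requesting device;
    # the helper device performs the heavy matrix multiplication.
    masked_x = x + r
    masked_y = W @ masked_x   # computed on the helper/accelerator

    # The requester unmasks locally to recover the true activation.
    y = masked_y - Wr
    assert np.allclose(y, W @ x, atol=1e-4)

Because W(x + r) - Wr = Wx, the unmasked result matches the unprotected computation up to floating-point rounding, which is consistent with the abstract's claim that no finite-field arithmetic or model modification is needed.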