We present DarkneTZ, a framework that uses an edge device's Trusted Execution
Environment (TEE) in conjunction with model partitioning to limit the attack
surface against Deep Neural Networks (DNNs). Increasingly, edge devices
(smartphones and consumer IoT devices) are equipped with pre-trained DNNs for a
variety of applications. This trend comes with privacy risks as models can leak
information about their training data through effective membership inference
attacks (MIAs). We evaluate the performance of DarkneTZ in terms of CPU
execution time, memory usage, and power consumption, using two small
and six large image classification models. Because the edge device's TEE has
limited memory, we partition the model into a set of more sensitive layers
(executed inside the device's TEE) and a set of remaining layers executed in
the untrusted part of the operating system. Our results show that even when a single
layer is hidden, we can provide reliable model privacy and defend against
state-of-the-art MIAs, with only 3% performance overhead. When fully utilizing the
TEE, DarkneTZ provides model protection with up to 10% overhead.
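The partitioning idea can be illustrated with a minimal sketch. This is a hypothetical simplification, not the actual DarkneTZ implementation (which executes Darknet layers inside an Arm TrustZone TEE): the `Layer` class, the `partition` and `forward` helpers, and the toy three-layer model are all invented for illustration, and the "trusted" execution is merely simulated.

```python
# Hypothetical sketch of DarkneTZ-style layer partitioning (illustrative only;
# the real system runs model layers inside an Arm TrustZone TEE).
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Layer:
    name: str
    fn: Callable[[float], float]  # stand-in for a real layer's forward pass

def partition(layers: List[Layer], split: int) -> Tuple[List[Layer], List[Layer]]:
    """Split the model: layers[:split] run in the untrusted OS (the rich
    execution environment, REE); layers[split:] run inside the TEE
    (only simulated here)."""
    return layers[:split], layers[split:]

def forward(ree_layers: List[Layer], tee_layers: List[Layer], x: float) -> float:
    for layer in ree_layers:   # untrusted world: intermediate outputs observable
        x = layer.fn(x)
    for layer in tee_layers:   # trusted world: a real TEE would keep these
        x = layer.fn(x)        # activations in secure memory, hidden from the OS
    return x

# Toy three-layer "model"; the functions are placeholders for real layers.
model = [Layer("conv1", lambda x: 2 * x),
         Layer("conv2", lambda x: x + 1),
         Layer("fc",    lambda x: 3 * x)]

# Hide only the final layer in the TEE, matching the cheapest defence
# evaluated in the abstract (a single hidden layer, ~3% overhead).
ree, tee = partition(model, split=len(model) - 1)
print(forward(ree, tee, 1.0))  # -> 9.0
```

The design choice mirrored here is that the split point trades off protection against memory use: moving the split earlier hides more layers inside the TEE at the cost of its limited secure memory.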