Deep learning model developers often use cloud GPU resources to experiment with large datasets and models that require expensive computing setups. However, this practice raises privacy concerns. Adversaries may be interested in 1) the personally identifiable information or objects encoded in the training images, and 2) the models trained on sensitive data, which can be exploited to launch model-based attacks. Learning deep neural networks (DNNs) from encrypted data remains impractical because of the size of the training data and the cost of the learning process. A few recent studies have
tried to provide efficient, practical solutions for protecting data privacy in outsourced deep learning. However, we find that they are vulnerable to certain attacks. In this paper, we identify two types of attacks unique to outsourced deep learning: 1) the visual re-identification attack on the training data, and 2) the class membership attack on the learned models, both of which can break existing privacy-preserving solutions. We develop an image-disguising approach to address these attacks and design a suite of methods to evaluate the attack resilience of privacy-preserving solutions for outsourced deep learning. Experimental results show that our
image-disguising mechanisms provide a high level of protection against both attacks while still generating high-quality DNN models for image classification.