As machine learning becomes a common practice and commodity, numerous cloud-based
services and frameworks are offered to help customers develop and deploy
machine learning applications. While it is prevalent to outsource model
training and serving tasks to the cloud, it is important to protect the privacy
of sensitive samples in the training dataset and prevent information leakage to
untrusted third parties. Past work has shown that a malicious machine learning
service provider or end user can easily extract critical information about the
training samples from the model parameters or even just the model outputs.
In this paper, we propose a novel and generic methodology to preserve the
privacy of training data in machine learning applications. Specifically, we
introduce an obfuscate function and apply it to the training data before
feeding it to the model training task. This function either adds random noise
to existing samples or augments the dataset with new samples, thereby hiding
sensitive information about the properties of individual samples or the
statistical properties of a group of samples. Meanwhile, the model trained from
the obfuscated dataset can still achieve high accuracy. With this approach,
customers can safely disclose the data or models to third-party providers or
end users without worrying about data privacy. Our experiments show that this
approach can effectively defeat four existing types of machine learning privacy
attacks at negligible accuracy cost.
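To make the idea concrete, the following is a minimal sketch of such an obfuscate function in NumPy. The Gaussian noise model, the mixup-style same-class augmentation, and the parameter names (noise_std, aug_ratio) are illustrative assumptions for this sketch, not the exact design evaluated in the paper.

```python
import numpy as np

def obfuscate(X, y, noise_std=0.1, aug_ratio=0.5, rng=None):
    """Obfuscate a training set by perturbing existing samples and
    augmenting it with synthetic ones.

    Sketch only: the noise distribution and augmentation scheme are
    assumptions, not the paper's exact construction.

    X: (n, d) feature matrix; y: (n,) integer labels.
    """
    rng = np.random.default_rng() if rng is None else rng

    # 1) Add zero-mean Gaussian noise to every existing sample, so
    #    individual feature values are no longer exactly recoverable.
    X_noisy = X + rng.normal(0.0, noise_std, size=X.shape)

    # 2) Augment with new samples: convex mixtures of random same-class
    #    pairs (a mixup-style heuristic chosen for this sketch).
    n_aug = int(aug_ratio * len(X))
    idx_a = rng.integers(0, len(X), size=n_aug)
    # Pair each chosen sample with another sample of the same class.
    idx_b = np.array([rng.choice(np.flatnonzero(y == y[i])) for i in idx_a])
    lam = rng.uniform(0.3, 0.7, size=(n_aug, 1))
    X_new = lam * X[idx_a] + (1.0 - lam) * X[idx_b]
    y_new = y[idx_a]  # same-class mixing keeps labels unchanged

    # Return the obfuscated dataset: noisy originals plus synthetic samples.
    return np.vstack([X_noisy, X_new]), np.concatenate([y, y_new])
```

In practice, noise_std and aug_ratio would be tuned to trade off the strength of the privacy protection against the accuracy of the model trained on the obfuscated data.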