Machine learning models benefit from large and diverse datasets. Using such
datasets, however, often requires trusting a centralized data aggregator. For
sensitive applications such as healthcare and finance, this is undesirable, as it
could compromise patient privacy or divulge trade secrets. Recent advances in
secure and privacy-preserving computation, including trusted hardware enclaves
and differential privacy, offer a way for mutually distrusting parties to
efficiently train a machine learning model without revealing the training data.
In this work, we introduce Myelin, a deep learning framework that combines
these privacy-preservation primitives, and use it to establish a baseline level
of performance for fully private machine learning.