Privacy is crucial in many applications of machine learning. Legal, ethical,
and societal issues restrict the sharing of sensitive data, making it difficult
to learn from datasets that are partitioned among many parties. One important
instance of such a distributed setting arises when the information about each
record in the dataset is held by different data owners (the design matrix is
"vertically partitioned").
In this setting, few approaches exist for sharing data privately for the
purposes of statistical estimation, and the classical setup of differential
privacy with a "trusted curator" preparing the data does not apply. We work
with the notion of $(\epsilon,\delta)$-distributed differential privacy, which
extends single-party differential privacy to the distributed,
vertically-partitioned case. We propose PriDE, a scalable framework for
distributed estimation in which each party communicates perturbed random
projections of its locally held features, ensuring that
$(\epsilon,\delta)$-distributed differential privacy is preserved.
For $\ell_2$-penalized supervised learning problems, PriDE has bounded
estimation error compared with the optimal estimates obtained without privacy
constraints in the non-distributed setting. We confirm this empirically on
real-world and synthetic datasets.
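
As a minimal illustration of the kind of mechanism referred to above (the exact PriDE construction is given in the body of the paper; the symbols $\Pi$, $E_j$, $k$, and $\sigma$ below are illustrative and not defined in the abstract), a perturbed random projection of party $j$'s locally held feature block $X_j \in \mathbb{R}^{n \times d_j}$ could take the form
\[
  \tilde{X}_j = \Pi X_j + E_j,
  \qquad \Pi \in \mathbb{R}^{k \times n} \text{ with i.i.d. } \mathcal{N}(0, 1/k) \text{ entries},
  \qquad (E_j)_{ab} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2),
\]
where, in the spirit of the Gaussian mechanism, the noise level $\sigma$ would be calibrated to the sensitivity of $\Pi X_j$ so that releasing $\tilde{X}_j$ satisfies the stated $(\epsilon,\delta)$ guarantee.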