Secure multiparty computations enable the distribution of so-called shares of
sensitive data to multiple parties such that the multiple parties can
effectively process the data while being unable to glean much information about
the data (at least not without collusion among all parties to reassemble all
the shares). At the conclusion of the computations, the parties can then send
all their processed results to a trusted third party (perhaps the data
provider), with only the trusted third party being able to view the final
results. Secure multiparty computations for privacy-preserving
machine learning turn out to be possible using only standard floating-point
arithmetic, at least with a carefully controlled leakage of information that
is less than the loss of accuracy due to roundoff, all backed by rigorous
mathematical proofs of worst-case bounds on information leakage and numerical
stability in finite-precision arithmetic. Numerical examples illustrate the high performance
attained on commodity off-the-shelf hardware for generalized linear models,
including ordinary linear least-squares regression, binary and multinomial
logistic regression, probit regression, and Poisson regression.
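The additive secret sharing described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's protocol: the function names, the number of parties, and the mask magnitude `scale` are all illustrative assumptions. Each party holds one share, processes it independently (here, a public linear step), and only the trusted third party sums the processed shares; reconstruction is exact up to floating-point roundoff.

```python
import random

def make_shares(x, n_parties=3, scale=1e6):
    """Split x into additive shares that sum to x.

    The first n_parties-1 shares are random masks; the magnitude
    `scale` is an illustrative assumption, chosen large enough that
    any single share reveals little about x.
    """
    masks = [random.uniform(-scale, scale) for _ in range(n_parties - 1)]
    return masks + [x - sum(masks)]

def linear_step(shares, weight):
    # Each party scales its own share by a public weight, with no
    # communication; linearity means the shares still sum correctly.
    return [weight * s for s in shares]

def reconstruct(shares):
    # Only the trusted third party performs this final summation.
    return sum(shares)

x = 3.14159
shares = make_shares(x)
result = reconstruct(linear_step(shares, 2.0))
# Agreement with 2*x holds only up to roundoff, since the large
# masks amplify floating-point cancellation error.
assert abs(result - 2.0 * x) < 1e-6
```

The residual error after cancellation of the large masks is exactly the roundoff-level leakage trade-off the abstract refers to: the masks hide the data, but their magnitude costs a controlled amount of floating-point accuracy.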