Differential privacy is a strong notion of privacy that can be used to prove
formal guarantees, in terms of a privacy budget $\epsilon$, about how much
information is leaked by a mechanism. However, implementations of
privacy-preserving machine learning often select large values of $\epsilon$ in
order to obtain acceptable model utility, with little understanding of the
impact of such choices on meaningful privacy. Moreover, in scenarios where
iterative learning procedures are used, variants of differential privacy that
offer tighter analyses are often employed; these appear to reduce the required
privacy budget, but the resulting trade-offs between privacy and utility are
poorly understood. In
this paper, we quantify the impact of these choices on privacy in experiments
with logistic regression and neural network models. Our main finding is that
there is a huge gap between the upper bounds on privacy loss that can be
guaranteed, even with advanced mechanisms, and the effective privacy loss that
can be measured using current inference attacks. Current mechanisms for
differentially private machine learning rarely offer acceptable utility-privacy
trade-offs with guarantees for complex learning tasks: settings that incur
limited accuracy loss provide meaningless privacy guarantees, and settings that
provide strong privacy guarantees result in useless models. Code for the
experiments is available at https://github.com/bargavj/EvaluatingDPML
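
For concreteness, the guarantee that the privacy budget $\epsilon$ quantifies is the standard $(\epsilon, \delta)$-differential privacy condition, stated here only as background rather than as a contribution of this paper:
\[
\Pr[\mathcal{M}(D) \in S] \leq e^{\epsilon} \, \Pr[\mathcal{M}(D') \in S] + \delta ,
\]
for a randomized mechanism $\mathcal{M}$, all neighboring datasets $D$ and $D'$ differing in a single record, and all sets of outcomes $S \subseteq \mathrm{Range}(\mathcal{M})$. Smaller values of $\epsilon$ (and $\delta$) mean the mechanism's output distribution can depend less on any individual record.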