Differential privacy (DP) is a widely used notion for reasoning about privacy
when publishing aggregate data. In this paper, we observe that certain DP
mechanisms are amenable to an a posteriori privacy analysis that exploits the fact
that some outputs leak less information about the input database than others.
To take advantage of this phenomenon, we introduce output differential privacy
(ODP) and a new composition experiment, and leverage these constructs to obtain
significant privacy budget savings and improved privacy-utility tradeoffs under
composition. These savings come at no cost to privacy: we do not weaken the
privacy guarantee.
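As a rough sketch of what output-dependent accounting could look like (the
class and interface below are our own illustrative assumptions, not the paper's
ODP definitions), consider a budget accountant that is charged per realized
output rather than per worst-case guarantee:

```python
from dataclasses import dataclass


@dataclass
class AposterioriAccountant:
    """Hypothetical a posteriori budget accountant (illustrative only).

    Instead of charging each invocation its worst-case epsilon, we charge
    an output-dependent epsilon eps(o) <= eps_max for the output o that was
    actually released. Under composition, the total spent can then be well
    below k * eps_max over k invocations.
    """
    spent: float = 0.0

    def charge(self, eps_worst_case: float, eps_for_output: float) -> float:
        # The output-dependent charge never exceeds the worst-case bound,
        # so the usual DP guarantee is untouched.
        assert 0.0 <= eps_for_output <= eps_worst_case
        self.spent += eps_for_output
        return self.spent
```

For instance, a mechanism whose "uninformative" outputs provably leak nothing
could pass eps_for_output = 0 for those outputs, leaving the budget untouched.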
To demonstrate the applicability of our a posteriori privacy analysis
techniques, we analyze two well-known mechanisms: the Sparse Vector Technique
and the Propose-Test-Release framework.
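For concreteness, here is a minimal sketch of the Sparse Vector Technique in
its standard AboveThreshold form (the textbook version for sensitivity-1
queries; the paper's refined ODP accounting is not reproduced here):

```python
import numpy as np


def above_threshold(queries, data, threshold, eps, rng=None):
    """AboveThreshold form of the Sparse Vector Technique.

    Textbook version for sensitivity-1 queries. The entire run is eps-DP
    regardless of how many below-threshold answers it releases; those
    low-leakage outputs are what an a posteriori analysis can exploit.
    """
    rng = rng or np.random.default_rng()
    noisy_threshold = threshold + rng.laplace(scale=2.0 / eps)
    for i, q in enumerate(queries):
        # Fresh noise per query; halt at the first above-threshold hit.
        if q(data) + rng.laplace(scale=4.0 / eps) >= noisy_threshold:
            return i      # index of the first noisy above-threshold query
    return None           # every query stayed (noisily) below the threshold
```

The eps-DP guarantee holds no matter how many below-threshold answers are
released, which is the classical hint that such outputs leak less information.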
We then show how our techniques can be used to save privacy budget in more
general contexts: when a differentially private iterative mechanism terminates
before its maximum number of iterations is reached, and when the output of a
DP mechanism provides unsatisfactory utility. Examples of the former include
iterative optimization algorithms; examples of the latter include training a
machine learning model that exhibits large generalization error. Beyond the
applications in this paper, our techniques can be used to refine the analysis
of existing DP mechanisms and to guide the design of future ones.
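To illustrate the early-termination setting, the following is a minimal sketch
under our own assumptions (a fixed per-iteration budget and a hypothetical
step/stop interface; the paper's actual accounting may differ). Worst-case
composition charges the full budget up front, whereas an a posteriori analysis
can charge only for the iterations that actually ran:

```python
def run_iterative(step, should_stop, max_iters, eps_total):
    """Run a DP iterative mechanism with per-iteration budget
    eps_total / max_iters. (Hypothetical interface, illustrative only.)

    Worst-case accounting charges eps_total regardless of when the loop
    stops; a posteriori accounting charges used * eps_step when the
    mechanism terminates early.
    """
    eps_step = eps_total / max_iters
    state, used = None, 0
    for _ in range(max_iters):
        state = step(state, eps_step)   # each step is assumed eps_step-DP
        used += 1
        if should_stop(state):          # e.g., a convergence criterion;
            break                       # depends only on DP outputs
    return state, used * eps_step       # budget actually consumed
```

Since the stopping rule depends only on already-released DP outputs, checking
it is post-processing and consumes no additional budget.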