Differential privacy has proven exceptionally successful in providing
provable privacy guarantees for classical computations.
More recently, the concept was generalized to quantum computations. While
classical computations are essentially noiseless and differential privacy is
therefore often achieved by artificially adding noise, near-term quantum
computers are inherently noisy, and it has been observed that this noise
naturally provides differential privacy as a feature.
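To make the classical route concrete: the standard way to "artificially add noise" is the Laplace mechanism, which perturbs a query answer with noise calibrated to the query's sensitivity. The following is a minimal sketch (the function name and parameters are our own illustration, not notation from this work):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value perturbed by Laplace noise.

    Noise with scale sensitivity / epsilon guarantees epsilon-differential
    privacy for a query whose L1 sensitivity is at most `sensitivity`.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query (sensitivity 1) released with epsilon = 0.5.
noisy_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5)
```

Smaller `epsilon` (stronger privacy) forces a larger noise scale, which is exactly the privacy–utility trade-off that reappears in the quantum setting below.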
In this work we discuss quantum differential privacy in an information-theoretic
framework by casting it as a quantum divergence. A main advantage of
this approach is that differential privacy becomes a property solely based on
the output states of the computation, without the need to check it for every
measurement. This leads to simpler proofs and generalized statements of its
properties, as well as several new bounds for both general and specific noise
models. In particular, these include common representations of quantum circuits
and quantum machine learning concepts. Here, we focus on the gap between the
amount of noise required to achieve certain levels of differential privacy
and the amount that would render any computation useless. Finally, we also
generalize the classical concepts of local differential privacy, Rényi
differential privacy and the hypothesis testing interpretation to the quantum
setting, providing several new properties and insights.
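The divergence-based viewpoint can be illustrated numerically. A common formulation of the quantum $(\varepsilon,\delta)$-DP condition on a pair of output states uses the hockey-stick divergence $E_\gamma(\rho\|\sigma) = \mathrm{Tr}[(\rho - \gamma\sigma)_+]$ with $\gamma = e^\varepsilon$; the sketch below (our own illustration, with function names that are not from this work) checks that condition directly on density matrices, with no reference to measurements:

```python
import numpy as np

def hockey_stick_divergence(rho, sigma, gamma):
    """E_gamma(rho || sigma) = Tr[(rho - gamma * sigma)_+],
    the trace of the positive part of the Hermitian matrix rho - gamma*sigma."""
    eigvals = np.linalg.eigvalsh(rho - gamma * sigma)
    return float(np.sum(eigvals[eigvals > 0]))

def is_quantum_dp(rho, sigma, epsilon, delta):
    """(epsilon, delta)-DP condition on a pair of output states:
    E_{e^epsilon}(rho || sigma) <= delta, and symmetrically."""
    gamma = np.exp(epsilon)
    return (hockey_stick_divergence(rho, sigma, gamma) <= delta
            and hockey_stick_divergence(sigma, rho, gamma) <= delta)

# Two slightly different single-qubit output states, e.g. after a noisy channel.
rho = np.array([[0.6, 0.0], [0.0, 0.4]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])
print(is_quantum_dp(rho, sigma, epsilon=0.3, delta=0.01))  # → True
```

Note that at $\gamma = 1$ the hockey-stick divergence reduces to the trace distance, so this one quantity interpolates between the DP condition and a standard distinguishability measure, which is the essence of the hypothesis testing interpretation mentioned above.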