Differentially private (DP) machine learning has recently become popular. The
privacy loss of DP algorithms is commonly reported using
$(\varepsilon,\delta)$-DP. In this paper, we propose a numerical accountant for
evaluating the privacy loss of algorithms with continuous one-dimensional
output. This accountant can be applied to the subsampled multidimensional
Gaussian mechanism, which underlies the popular DP stochastic gradient descent
(DP-SGD) algorithm.
The proposed method is based on a numerical approximation of an integral
formula which gives the exact $(\varepsilon,\delta)$-values. The approximation
is carried out by discretising the integral and by evaluating discrete
convolutions using the fast Fourier transform algorithm. We give both
theoretical error bounds and numerical error estimates for the approximation.
Experimental comparisons with state-of-the-art techniques demonstrate
significant improvements in the tightness of the privacy bounds, in
computation time, or in both. A Python implementation of the method is
available on GitHub (https://github.com/DPBayes/PLD-Accountant/).
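
To illustrate the core computation, the following is a minimal sketch for the plain (non-subsampled) Gaussian mechanism with sensitivity 1, whose privacy loss distribution has the closed form $\mathcal{N}(\mu, 2\mu)$ with $\mu = 1/(2\sigma^2)$. The function name, the grid parameters `n` and `L`, and the example values are illustrative choices, not part of the PLD-Accountant API:

```python
import numpy as np

def fourier_accountant_delta(eps, sigma, k, n=2**18, L=30.0):
    """Approximate delta(eps) after k compositions of the Gaussian
    mechanism (sensitivity 1, noise std sigma): discretise the privacy
    loss distribution (PLD), compose by taking the k-th pointwise power
    in the Fourier domain, then sum the integrand over the tail."""
    dx = 2.0 * L / n
    s = dx * (np.arange(n) - n // 2)  # discretisation grid on [-L, L)
    mu = 1.0 / (2.0 * sigma**2)
    # PLD of a single Gaussian mechanism is N(mu, 2*mu); store it as
    # probability masses on the grid.
    pld = dx * np.exp(-(s - mu)**2 / (4.0 * mu)) / np.sqrt(4.0 * np.pi * mu)
    # Shift so that index 0 corresponds to s = 0; circular convolution
    # then adds privacy losses modulo the period 2L (the wraparound
    # error is small when L covers the composed distribution).
    f = np.fft.fft(np.fft.ifftshift(pld))
    composed = np.fft.fftshift(np.real(np.fft.ifft(f**k)))
    # delta(eps) = sum over grid points s_i > eps of (1 - e^{eps - s_i})
    # times the composed probability mass at s_i.
    tail = s > eps
    return float(np.sum((1.0 - np.exp(eps - s[tail])) * composed[tail]))

# Example: delta after k = 100 compositions with noise level sigma = 4.
print(fourier_accountant_delta(eps=2.0, sigma=4.0, k=100))
```

The accountant of the paper additionally covers the subsampled Gaussian mechanism, whose privacy loss distribution lacks such a simple closed form, and supplies rigorous bounds on the discretisation and periodisation errors that the sketch above leaves uncontrolled.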