Machine learning (ML) models trained by differentially private stochastic
gradient descent (DP-SGD) have much lower utility than their non-private
counterparts. To
mitigate this degradation, we propose a DP Laplacian smoothing SGD (DP-LSSGD)
to train ML models with differential privacy (DP) guarantees. At the core of
DP-LSSGD is Laplacian smoothing, which smooths out the Gaussian noise injected
by the Gaussian mechanism. With the same amount of injected noise, DP-LSSGD
attains the same DP guarantee as DP-SGD, but in practice it makes training both
convex and nonconvex ML models more stable and enables the trained models to
generalize better. The proposed algorithm is simple to
implement, and its extra computational and memory overhead relative to DP-SGD
is negligible. DP-LSSGD is applicable to training a wide variety of ML models,
including deep neural networks (DNNs). The code is available at
\url{https://github.com/BaoWangMath/DP-LSSGD}.
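
The abstract does not spell out the update rule, so the following is only a
rough, illustrative sketch of one DP-LSSGD-style step in NumPy: per-sample
gradients are clipped and averaged as in standard DP-SGD, Gaussian noise is
added, and the noisy gradient is smoothed by $(I + \sigma L)^{-1}$ (with $L$ a
1-D discrete Laplacian, applied via FFT) before the descent step. All names,
hyperparameters, and the 1-D flat-parameter assumption here are the editor's
assumptions, not the authors' reference implementation.
\begin{verbatim}
import numpy as np

def laplacian_smooth(grad, sigma=1.0):
    """Apply (I + sigma*L)^{-1} to a flat 1-D gradient via FFT,
    where L is the 1-D discrete Laplacian with periodic boundary."""
    d = grad.size
    k = np.arange(d)
    # Eigenvalues of the circulant stencil [2, -1, 0, ..., 0, -1].
    denom = 1.0 + sigma * (2.0 - 2.0 * np.cos(2.0 * np.pi * k / d))
    return np.real(np.fft.ifft(np.fft.fft(grad) / denom))

def dp_lssgd_step(w, per_sample_grads, lr=0.1, clip=1.0,
                  noise_mult=1.0, sigma=1.0):
    """One illustrative DP-LSSGD step: clip, average, add Gaussian
    noise, Laplacian-smooth, then take a gradient descent step."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    g_bar = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, noise_mult * clip / len(per_sample_grads),
                             size=w.shape)
    return w - lr * laplacian_smooth(g_bar + noise, sigma)
\end{verbatim}
Setting \texttt{sigma = 0} recovers a plain DP-SGD step, since the smoothing
operator then reduces to the identity.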