The alternating direction method of multipliers (ADMM) is a popular method for
designing distributed versions of machine learning algorithms, whereby local
computations are performed on local data and the outputs are exchanged among
neighbors in an iterative fashion. Each such exchange of intermediate results
can leak information about the underlying local data, so privacy becomes a
concern over the course of the iterations. A differentially private ADMM was
proposed in prior work (Zhang & Zhu, 2017), where only the privacy loss of a
single node during one iteration was bounded; this makes it difficult to
balance the tradeoff between the utility attained through distributed
computation and the privacy guarantee when the total privacy loss of all nodes
over the entire iterative process is considered.
iterative process. We propose a perturbation method for ADMM where the
perturbed term is correlated with the penalty parameters; this is shown to
improve the utility and privacy simultaneously. The method is based on a
modified ADMM where each node independently determines its own penalty
parameter in every iteration and decouples it from the dual updating step size.
The condition for convergence of the modified ADMM and the lower bound on the
convergence rate are also derived.
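As a rough illustration of the modified scheme described above, the sketch below runs consensus ADMM on local least-squares problems, where each node uses its own per-iteration penalty parameter `eta_i(t)`, the primal update is perturbed with Gaussian noise whose scale shrinks as the penalty grows (the penalty-correlated perturbation), and the dual step size `rho` is decoupled from the penalties. The penalty schedule, noise-to-penalty coupling, and all parameter values are illustrative assumptions, not the paper's exact algorithm or its privacy calibration.

```python
import numpy as np

def private_admm(A_list, b_list, T=300, rho=1.0, sigma=0.0, seed=0):
    """Sketch of a modified consensus ADMM for local objectives
    f_i(x) = 0.5 * ||A_i x - b_i||^2 with per-node, per-iteration penalties
    and penalty-correlated perturbation (illustrative, not the paper's
    exact method)."""
    rng = np.random.default_rng(seed)
    n = len(A_list)
    d = A_list[0].shape[1]
    z = np.zeros(d)                       # consensus variable
    y = [np.zeros(d) for _ in range(n)]   # dual variables
    for t in range(T):
        # each node picks its own penalty in every iteration
        # (assumed schedule: node-specific weight times a slow increase)
        etas = [(1.0 + 0.1 * i) * (1.0 + 0.01 * t) for i in range(n)]
        xs = []
        for i in range(n):
            Ai, bi = A_list[i], b_list[i]
            # closed-form primal update for the local quadratic objective
            x_i = np.linalg.solve(Ai.T @ Ai + etas[i] * np.eye(d),
                                  Ai.T @ bi + etas[i] * z - y[i])
            # perturbation correlated with the penalty: std ~ sigma / eta_i(t),
            # so a larger penalty implies a smaller added noise
            x_i = x_i + rng.normal(0.0, sigma / etas[i], size=d)
            xs.append(x_i)
        # consensus update: penalty-weighted average of the local iterates
        z = sum(etas[i] * xs[i] + y[i] for i in range(n)) / sum(etas)
        # dual update with step size rho decoupled from the penalties
        for i in range(n):
            y[i] = y[i] + rho * (xs[i] - z)
    return z
```

With `sigma=0` this reduces to a non-private modified ADMM and, at a fixed point, the stationarity conditions force `z` to solve the stacked least-squares problem; with `sigma > 0` each released iterate is noisy, and the shrinking noise scale mirrors the idea that growing penalties can improve utility without spending more privacy per iteration.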
External dataset: Adult dataset from the UCI Machine Learning Repository