Abstract
Sketching is one of the most fundamental tools in large-scale machine
learning. It saves runtime and memory by randomly compressing the original
large problem into lower dimensions. In this paper, we propose a novel
sketching scheme for first-order methods in the large-scale distributed
learning setting, which reduces the communication costs between distributed
agents while still guaranteeing the convergence of the algorithms.
while the convergence of the algorithms is still guaranteed. Given gradient
information in a high dimension $d$, the agent passes the compressed
information processed by a sketching matrix $R\in \mathbb{R}^{s\times d}$ with
$s\ll d$, and the receiver de-compressed via the de-sketching matrix $R^\top$
to ``recover'' the information in original dimension. Using such a framework,
we develop algorithms for federated learning with lower communication costs.
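As a concrete illustration of this compression step, a minimal unbiased
instance of the scheme (the Gaussian scaling of $R$ below is an illustrative
assumption, not necessarily the construction used in the paper) is
$$
\tilde g \;=\; R^\top (R g), \qquad
R_{ij} \overset{\text{i.i.d.}}{\sim} \mathcal{N}\!\left(0, \tfrac{1}{s}\right), \qquad
\mathbb{E}\!\left[R^\top R\right] = I_d \;\Longrightarrow\; \mathbb{E}[\tilde g] = g,
$$
so each round only $s \ll d$ numbers are communicated, while the de-sketched
vector $\tilde g$ remains an unbiased estimate of the true gradient $g$.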
However, such random sketching does not directly protect the privacy of local
data. We show that the gradient leakage problem still exists after applying
the sketching technique by presenting a specific gradient attack method. As a
remedy, we rigorously prove that the algorithm becomes differentially private
when additional random noise is added to the gradient information, which
yields a first-order approach for federated learning tasks that is both
communication-efficient and differentially private. Our sketching scheme can
be further generalized to other learning settings and may be of independent
interest in its own right.
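As an illustration of the differential privacy remedy mentioned above, one
standard way to instantiate the added noise is the Gaussian mechanism with
per-round gradient clipping (the exact calibration in the paper may differ;
the clipping threshold $C$ and noise scale $\sigma$ below are illustrative
assumptions):
$$
\hat g \;=\; \frac{g}{\max\!\left(1, \|g\|_2 / C\right)}
  + \mathcal{N}\!\left(0, \sigma^2 C^2 I_d\right),
$$
where the clipped and noised gradient $\hat g$ is then sketched and
communicated as before; for a suitable choice of $\sigma$, each release of
$\hat g$ satisfies $(\varepsilon,\delta)$-differential privacy.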