Abstract
The hidden state threat model of differential privacy (DP) assumes that the
adversary has access only to the final trained machine learning (ML) model,
without seeing intermediate states during training. Current privacy analyses
under this model, however, are limited to convex optimization problems,
reducing their applicability to multi-layer neural networks, which are
essential in modern deep learning applications. Additionally, the most
successful applications of the hidden state privacy analyses in classification
tasks have been for logistic regression models. We demonstrate that it is
possible to privately train convex problems with privacy-utility trade-offs
comparable to those of one-hidden-layer ReLU networks trained with DP
stochastic gradient descent (DP-SGD). We achieve this through a stochastic
approximation of a dual formulation of the ReLU minimization problem, which
results in a strongly convex problem. This enables the use of existing hidden
state privacy analyses, providing accurate privacy bounds also for the noisy
cyclic mini-batch gradient descent (NoisyCGD) method with fixed disjoint
mini-batches. Our experiments on benchmark classification tasks show that
NoisyCGD can achieve privacy-utility trade-offs comparable to DP-SGD applied to
one-hidden-layer ReLU networks. Additionally, we provide theoretical utility
bounds that highlight the speed-ups gained through the convex approximation.
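The NoisyCGD procedure referenced above can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' code: it assumes a generic per-example gradient function `grad_fn`, a Gaussian noise mechanism, and per-example clipping at norm `clip`. The defining feature, per the abstract, is that the mini-batches are fixed, disjoint, and processed in the same cyclic order every epoch (no reshuffling or Poisson subsampling).

```python
import numpy as np

def noisy_cgd(X, y, grad_fn, epochs=5, batch_size=10, lr=0.05,
              clip=1.0, noise_mult=0.1, seed=0):
    """Sketch of noisy cyclic mini-batch gradient descent (NoisyCGD).

    Fixed disjoint mini-batches are visited in the same cyclic order
    each epoch; per-example gradients are clipped to L2 norm `clip`
    and Gaussian noise is added to the batch gradient sum, in the
    style of DP-SGD. `grad_fn(theta, x_i, y_i)` is an assumed
    per-example gradient oracle supplied by the caller.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    # Fixed, disjoint mini-batches -- chosen once, never reshuffled.
    batches = [np.arange(i, min(i + batch_size, n))
               for i in range(0, n, batch_size)]
    for _ in range(epochs):
        for idx in batches:  # same cyclic order every epoch
            grads = np.stack([grad_fn(theta, X[i], y[i]) for i in idx])
            # Clip each per-example gradient to L2 norm `clip`.
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip)
            # Gaussian noise scaled to the clipping norm.
            noise = rng.normal(0.0, noise_mult * clip, size=d)
            theta -= lr * (grads.sum(axis=0) + noise) / len(idx)
    return theta
```

On a strongly convex objective, such as the dual reformulation described above, the iterates contract between epochs, which is what enables the hidden state privacy analysis to bound the privacy loss of the final model only.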