Abstract
The widespread deployment of machine learning (ML) is raising serious
concerns about the privacy of users who contributed to the collection of
training data. Differential privacy (DP) is rapidly gaining momentum in
industry as a practical standard for privacy protection. Despite DP's
importance, however, little has been explored within the computer systems
community regarding the implications of this emerging ML algorithm for system
design.
In this work, we conduct a detailed workload characterization of a
state-of-the-art differentially private ML training algorithm named DP-SGD. We
uncover several unique properties of DP-SGD (e.g., its high memory capacity and
computation requirements compared with non-private ML) and root-cause its key
bottlenecks.
Based on our analysis, we propose an accelerator for differentially private ML
named DiVa, which provides a significant improvement in compute utilization,
leading to 2.6x higher energy efficiency than conventional systolic arrays.
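
To make the abstract's claim about DP-SGD's overheads concrete, below is a minimal sketch of the standard DP-SGD recipe (per-example gradient clipping plus calibrated Gaussian noise, as introduced by Abadi et al., 2016), applied to logistic regression. It is a generic illustration under stated assumptions, not the paper's implementation: the function dp_sgd_step and the parameters clip_norm and noise_multiplier are hypothetical names chosen for this sketch. The key point it shows is that, unlike non-private SGD, every example's gradient must be materialized and clipped individually before aggregation, which is the source of the extra memory capacity and computation the characterization targets.

import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # One DP-SGD update on a logistic-regression model.
    rng = np.random.default_rng() if rng is None else rng
    batch_size = X.shape[0]
    # Per-example gradients: a (batch, dim) matrix instead of a single
    # averaged gradient -- the source of DP-SGD's extra memory capacity
    # and computation relative to non-private training.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))          # predictions, shape (batch,)
    per_example_grads = (p - y)[:, None] * X    # shape (batch, dim)
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    per_example_grads *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise calibrated to the clipping norm, and average.
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
    noisy_grad = (per_example_grads.sum(axis=0) + noise) / batch_size
    return w - lr * noisy_grad

# Toy usage: 64 examples, 8 features.
rng = np.random.default_rng(42)
X = rng.normal(size=(64, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)
w = np.zeros(8)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)

Because the per-example gradient matrix grows with batch size, DP-SGD's working set and arithmetic cost exceed those of non-private SGD, which computes only the averaged gradient; this gap is the kind of bottleneck the paper's workload characterization examines.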