Abstract
Quantizing machine learning models has demonstrated its effectiveness in
lowering memory and inference costs while maintaining performance levels
comparable to the original models. In this work, we investigate the impact of
quantization procedures on the privacy of data-driven models, specifically
focusing on their vulnerability to membership inference attacks. We develop an
asymptotic theoretical analysis of Membership Inference Security (MIS),
characterizing the privacy implications of quantized model weights against the
most powerful (and possibly unknown) attacks. Building on these theoretical
insights, we propose a novel methodology to empirically assess and rank the
privacy levels of various quantization procedures. Using synthetic datasets, we
demonstrate the effectiveness of our approach in assessing the MIS of different
quantizers. Furthermore, we explore the trade-off between privacy and
performance using real-world data and models in the context of molecular
modeling.
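
To make the threat model concrete, the following is a minimal illustrative sketch, not the paper's method: a logistic regression model fitted on synthetic data, uniformly quantized weights, and a simple loss-threshold membership inference attack whose advantage (true-positive rate minus false-positive rate) can be compared between the full-precision and quantized models. All names, the quantizer, and the attack rule here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: synthetic binary classification data, with a held-out
# "non-member" set of the same size as the training ("member") set.
n, d = 200, 5
X_train = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_train = (X_train @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
X_out = rng.normal(size=(n, d))
y_out = (X_out @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X_train @ w)
    w -= 0.1 * X_train.T @ (p - y_train) / n

def quantize(w, bits=8):
    # Uniform symmetric quantization of the weight vector (one possible
    # quantizer; the paper ranks several such procedures).
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def losses(w, X, y):
    # Per-example cross-entropy loss, clipped for numerical stability.
    p = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def mia_advantage(w):
    # Loss-threshold attack: predict "member" when the per-example loss falls
    # below the median loss over the pooled member/non-member examples.
    # Advantage = TPR - FPR; higher means weaker privacy.
    l_in = losses(w, X_train, y_train)
    l_out = losses(w, X_out, y_out)
    thr = np.median(np.concatenate([l_in, l_out]))
    return np.mean(l_in < thr) - np.mean(l_out < thr)

adv_full = mia_advantage(w)
adv_q = mia_advantage(quantize(w, bits=4))
```

In this sketch the attack advantage serves as an empirical stand-in for the (inverse of the) MIS quantity: comparing `adv_full` and `adv_q` across quantizers and bit-widths is the kind of ranking experiment the abstract describes, though the paper's actual attack and security measure differ.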