Abstract
We present a framework to statistically audit the privacy guarantee conferred
by a differentially private machine learner in practice. While previous works
have taken steps toward evaluating privacy loss through poisoning attacks or
membership inference, they have been tailored to specific models or have
demonstrated low statistical power. Our work develops a general methodology to
empirically evaluate the privacy of differentially private machine learning
implementations, combining improved privacy search and verification methods
with a toolkit of influence-based poisoning attacks. We demonstrate
significantly improved auditing power over previous approaches on a variety of
models including logistic regression, Naive Bayes, and random forest. Our
method can be used to detect privacy violations caused by implementation errors or
misuse. When no violations are present, it can help characterize how much
information a given dataset, algorithm, and privacy specification may leak.
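
As a rough illustration of the kind of statistical audit described above, the sketch below shows one common way an empirical lower bound on epsilon can be derived from the outcomes of a distinguishing attack (e.g., a poisoning or membership attack) run repeatedly against a differentially private trainer. This is a minimal, hypothetical sketch of the general technique, not the authors' exact search or verification procedure; the function names and the example counts are illustrative assumptions.

```python
import math
from scipy.stats import beta


def clopper_pearson(k, n, alpha=0.05):
    """Two-sided Clopper-Pearson confidence interval for a binomial proportion."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi


def empirical_eps_lower_bound(tp, n_pos, fp, n_neg, delta=0.0, alpha=0.05):
    """Conservative lower bound on epsilon implied by a distinguishing attack.

    tp / n_pos : attack successes over trials where the poisoned (or member) dataset was used
    fp / n_neg : attack "successes" over trials where the clean (non-member) dataset was used

    For an (eps, delta)-DP mechanism, TPR <= e^eps * FPR + delta, so
    eps >= ln((TPR - delta) / FPR).  Plugging in a lower confidence bound on
    TPR and an upper confidence bound on FPR keeps the estimate a
    statistically valid lower bound at the chosen confidence level.
    """
    tpr_lb, _ = clopper_pearson(tp, n_pos, alpha)
    _, fpr_ub = clopper_pearson(fp, n_neg, alpha)
    if tpr_lb - delta <= 0 or fpr_ub <= 0:
        return 0.0  # attack too weak to certify any privacy violation
    return max(0.0, math.log((tpr_lb - delta) / fpr_ub))


# Hypothetical example: 500 training runs per world; the attacker guesses
# correctly 450 times on the poisoned world and 60 times on the clean world.
print(empirical_eps_lower_bound(tp=450, n_pos=500, fp=60, n_neg=500))
```

If the resulting lower bound exceeds the epsilon claimed by the implementation, the audit has found evidence of a violation; otherwise it quantifies how much leakage the attack was able to demonstrate for that dataset, algorithm, and privacy specification.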