Abstract
Empirical defenses for machine learning privacy forgo the provable guarantees
of differential privacy in the hope of achieving higher utility while resisting
realistic adversaries. We identify severe pitfalls in existing empirical
privacy evaluations (based on membership inference attacks) that result in
misleading conclusions. In particular, we show that prior evaluations fail to
characterize the privacy leakage of the most vulnerable samples, use weak
attacks, and avoid comparisons with practical differential privacy baselines.
In 5 case studies of empirical privacy defenses, we find that prior evaluations
underestimate privacy leakage by an order of magnitude. Under our stronger
evaluation, none of the empirical defenses we study are competitive with a
properly tuned, high-utility DP-SGD baseline (with vacuous provable
guarantees).