Abstract
Membership inference attacks (MIAs) are used to test the practical privacy of
machine learning models. MIAs complement formal guarantees from differential
privacy (DP) under a more realistic adversary model. We analyse the MIA
vulnerability of fine-tuned neural networks both empirically and theoretically,
the latter using a simplified model of fine-tuning. We show that the
vulnerability of non-DP models, when measured as the attacker advantage at a
fixed false positive rate, decreases according to a simple power law as the
number of examples per class increases. A similar power law applies even to the
most vulnerable points, but the dataset size needed to adequately protect them
is very large.
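As a rough illustration of the scaling claim, the power law can be written in the following form; the symbols C and gamma are hypothetical constants introduced here for exposition, and the abstract does not report their fitted values:

\[ \mathrm{adv}_{\beta}(n) \approx C \, n^{-\gamma}, \qquad C, \gamma > 0, \]

where \( n \) is the number of examples per class, \( \beta \) is the fixed false positive rate at which the attacker advantage \( \mathrm{adv}_{\beta} \) is measured, and \( \gamma \) governs how quickly vulnerability decays as the per-class dataset grows.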