We propose a novel and practical privacy notion called $f$-Membership
Inference Privacy ($f$-MIP), which explicitly considers the capabilities of
realistic adversaries under the membership inference attack threat model.
Consequently, $f$-MIP offers interpretable privacy guarantees and improved
utility (e.g., better classification accuracy). In particular, we derive a
parametric family of $f$-MIP guarantees that we refer to as $\mu$-Gaussian
Membership Inference Privacy ($\mu$-GMIP) by theoretically analyzing likelihood
ratio-based membership inference attacks on stochastic gradient descent (SGD).
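Concretely, $\mu$-GMIP instantiates $f$-MIP with the Gaussian trade-off curve known from the $f$-differential-privacy literature: an attacker operating at false-positive rate $\alpha$ attains true-positive rate at most $1 - G_\mu(\alpha)$, where
$$G_\mu(\alpha) = \Phi\!\left(\Phi^{-1}(1-\alpha) - \mu\right),$$
$\Phi$ denotes the standard normal CDF, and smaller $\mu \geq 0$ means stronger privacy ($\mu = 0$ reduces the attacker to random guessing).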
Our analysis highlights that models trained with standard SGD already offer an
elementary level of MIP. Additionally, we show how $f$-MIP can be amplified by adding noise to gradient updates, as sketched below.
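As a minimal sketch of such noise amplification (illustrative only, not the paper's exact mechanism), one can clip per-example gradients and perturb the averaged update with Gaussian noise; the helper `per_example_grad` and the noise multiplier `sigma` are assumptions of this sketch:

```python
import numpy as np

def noisy_sgd_step(params, per_example_grad, batch, lr=0.1, clip=1.0, sigma=0.5):
    """One SGD step with per-example clipping and Gaussian noise (sketch).

    `per_example_grad(params, x, y)` is an assumed helper returning a flat
    gradient for one example; `sigma` would be calibrated to the desired
    f-MIP level rather than fixed a priori.
    """
    grads = np.stack([per_example_grad(params, x, y) for x, y in batch])
    # Clip each per-example gradient to L2 norm at most `clip`.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Average the clipped gradients and add isotropic Gaussian noise.
    update = grads.mean(axis=0)
    update += np.random.normal(scale=sigma * clip / len(batch), size=update.shape)
    return params - lr * update
```

Increasing `sigma` strengthens the guarantee (smaller $\mu$) at some cost in utility.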
Our analysis further yields an analytical membership inference attack that offers two distinct advantages over previous approaches. First, unlike existing state-of-the-art attacks that require training hundreds of shadow models, our attack requires no shadow models at all. Second, our analytical attack enables straightforward auditing of our privacy notion $f$-MIP, as illustrated below.
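The following sketch shows why no shadow models are needed: when the null distribution of a per-point test statistic is known in closed form, membership can be decided by a simple Gaussian threshold test. Here, `score` and its null moments are hypothetical placeholders standing in for the gradient-based quantities derived in our analysis:

```python
from scipy.stats import norm

def analytical_membership_test(score, null_mean, null_std, alpha=0.05):
    """Predict membership by comparing a test statistic to a Gaussian null.

    Sketch only: (null_mean, null_std) are assumed to be available in
    closed form under the non-member hypothesis, so no shadow models are
    trained. The threshold fixes the false-positive rate at `alpha`.
    """
    threshold = null_mean + null_std * norm.ppf(1.0 - alpha)
    return score > threshold  # True => flag the point as a training member
```

Sweeping `alpha` traces out an empirical trade-off curve that can be compared against $G_\mu$, which is what makes auditing a claimed $\mu$-GMIP guarantee straightforward.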
Finally, we quantify how various hyperparameters (e.g., batch size, number of model parameters) and specific data characteristics determine an attacker's ability to accurately infer a point's membership in the training set. We demonstrate the effectiveness of our method on models trained
on vision and tabular datasets.