Membership inference determines, given a sample and the trained parameters of a
machine learning model, whether the sample was part of the training set. In
this paper, we derive the optimal strategy for membership inference under a few
assumptions on the distribution of the parameters. We show that optimal attacks
only depend on the loss function, and thus black-box attacks are as good as
white-box attacks. As the optimal strategy is not tractable, we provide
approximations of it that lead to several inference methods, and show that
existing membership inference methods are coarser approximations of this
optimal strategy. Our membership attacks outperform the state of the art in
various settings, ranging from a simple logistic regression to more complex
architectures and datasets, such as ResNet-101 and ImageNet.
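
Since the attack in this setting reduces to a function of the per-sample loss, its simplest instance is to threshold that loss: samples with low loss are declared training members. The sketch below is a minimal, hedged illustration of such a loss-threshold attack; the simulated loss distributions and the midpoint threshold are assumptions made for demonstration, not the paper's calibrated estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-sample losses (illustrative assumption): training members
# tend to have lower loss than held-out non-members.
member_losses = rng.gamma(shape=2.0, scale=0.10, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.50, size=1000)

def attack(losses, tau):
    """Declare a sample a training member iff its loss falls below tau."""
    return np.asarray(losses) < tau

# Calibrate tau as the midpoint of the two mean losses -- a crude stand-in
# for a principled estimate of the optimal threshold.
tau = 0.5 * (member_losses.mean() + nonmember_losses.mean())

# Balanced attack accuracy over members and non-members.
accuracy = 0.5 * (attack(member_losses, tau).mean()
                  + (~attack(nonmember_losses, tau)).mean())
print(f"tau = {tau:.3f}, attack accuracy = {accuracy:.3f}")
```

Note that the attack consumes only loss values, never gradients or internal activations, which is the sense in which a black-box adversary matches a white-box one here.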