Membership inference attacks exploit the vulnerability created when models
trained on private customer data are exposed to an adversary's queries. In a recently proposed
implementation of an auditing tool for measuring privacy leakage from sensitive
datasets, more refined aggregates such as log-loss scores are exposed both to
simulate inference attacks and to assess the total privacy leakage based on
the adversary's predictions. In this paper, we prove that this
additional information enables the adversary to infer the membership of any
number of datapoints with full accuracy in a single query, causing a complete
breach of membership privacy. Our approach requires neither training an attack
model nor any side knowledge on the adversary's part. Moreover, our algorithms are
agnostic to the model under attack and hence enable perfect membership
inference even for models that do not memorize or overfit. In particular, our
observations provide insight into the extent of information leakage from
statistical aggregates and how they can be exploited.
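The abstract does not spell out the construction, but the following minimal sketch illustrates, in principle, how a single aggregate log-loss query could encode per-point membership. All specifics here are assumptions for illustration, not the paper's actual algorithm: we assume the auditing tool returns the *sum* of log-losses over the member points among those queried, and the helper names (`craft_confidences`, `total_log_loss`, `decode`) are hypothetical.

```python
import math

def craft_confidences(n):
    # Illustrative encoding (an assumption, not the paper's construction):
    # assign point i the predicted probability p_i = exp(-2**i) on its true
    # label, so its log-loss contribution is the distinct power of two 2**i.
    return [math.exp(-(2 ** i)) for i in range(n)]

def total_log_loss(confidences, membership):
    # Hypothetical auditor response: the summed log-loss over member points.
    return sum(-math.log(p) for p, m in zip(confidences, membership) if m)

def decode(total, n):
    # Each member contributes a unique power of two, so membership bits can
    # be read off the binary expansion of the (rounded) aggregate loss.
    t = round(total)
    return [bool((t >> i) & 1) for i in range(n)]

n = 8
membership = [True, False, True, True, False, False, True, False]
confidences = craft_confidences(n)
aggregate = total_log_loss(confidences, membership)
assert decode(aggregate, n) == membership  # one query, exact recovery
```

Under these assumptions the adversary recovers every membership bit exactly from one scalar response, without training an attack model or exploiting overfitting, which is consistent with the result the abstract claims. (The power-of-two encoding shown here is limited by floating-point range; the paper's actual construction may differ.)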