Machine learning models aim to make accurate predictions for specific
tasks by learning relevant properties and patterns from data. In doing so,
a model may also learn properties that are unrelated to its
primary task. Property Inference Attacks exploit this: they aim to infer from a
given model (\ie the target model) properties of the training dataset
that are seemingly unrelated to the model's primary goal. If the training data is
sensitive, such an attack can lead to privacy leakage. This paper
investigates the influence of the target model's complexity on the accuracy of
this type of attack, focusing on convolutional neural network classifiers. We
perform attacks on models trained on facial images to predict whether
a person's mouth is open. The goal of our attacks is to infer whether the training
dataset is balanced with respect to gender. Our findings reveal that the risk of a privacy
breach is present regardless of the target model's complexity: for all
studied architectures, the attack's accuracy is clearly above the baseline. We
discuss the implications of property inference for personal data in light
of Data Protection Regulations and Guidelines.