Membership Inference Attacks (MIAs) pose a significant privacy risk, as they
enable adversaries to determine whether a specific data point was included in
the training dataset of a model. While Machine Unlearning is primarily designed
as a privacy mechanism to efficiently remove private data from a machine
learning model without the need for full retraining, its impact on the
susceptibility of models to MIAs remains an open question. In this study, we
systematically assess the vulnerability of models to MIAs after applying
state-of-the-art Machine Unlearning algorithms. Our analysis spans four diverse
datasets (two image and two tabular), examining how different unlearning
approaches influence the exposure of models to membership
inference. The findings show that while Machine Unlearning is not
inherently a countermeasure against MIAs, the choice of unlearning algorithm and
the data characteristics can significantly affect a model's vulnerability. This work
provides essential insights into the interplay between Machine Unlearning and
MIAs, offering guidance for the design of privacy-preserving machine learning
systems.
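To make the threat model concrete, a common baseline MIA is a simple loss-threshold test: a sample whose loss under the target model falls below a calibrated threshold is predicted to be a training member. The sketch below illustrates that heuristic only; the function name, the threshold value, and all loss values are hypothetical and are not taken from the study.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict membership: low loss suggests the sample was in the
    training set. `threshold` is assumed to be calibrated on held-out
    data. Returns a boolean array (True = predicted member)."""
    return np.asarray(losses) < threshold

# Toy illustration with synthetic, hypothetical loss values:
# training members typically incur lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.02, 0.20])
nonmember_losses = np.array([0.90, 1.20, 0.70, 0.40])

preds_members = loss_threshold_mia(member_losses, threshold=0.3)
preds_nonmembers = loss_threshold_mia(nonmember_losses, threshold=0.3)

# Attack accuracy on this toy set: correct member predictions plus
# correct non-member predictions over all eight samples.
accuracy = (preds_members.sum() + (~preds_nonmembers).sum()) / 8
```

On this contrived data the attack is perfect (accuracy 1.0); in practice the member and non-member loss distributions overlap, and the degree of overlap after unlearning is precisely what determines the residual MIA risk the abstract discusses.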