Machine learning algorithms, when applied to sensitive data, pose a potential
threat to privacy. A growing body of prior work has demonstrated that the
membership inference attack (MIA) can disclose specific private information in
the training data to an attacker. Meanwhile, the algorithmic fairness of
machine learning has attracted increasing attention from both academia and
industry. Algorithmic fairness ensures that machine learning models do not
discriminate against particular demographic groups of individuals (e.g., Black
or female people). Given that MIA is itself a learning model, a serious concern
arises as to whether MIA treats all groups of individuals ``fairly'', i.e.,
whether a particular group is more vulnerable to MIA than the others. This
paper examines the algorithmic fairness issue in the context of
MIA and its defenses. First, for fairness evaluation, it formalizes the
notion of vulnerability disparity (VD) to quantify the difference in MIA
treatment across demographic groups. Second, it evaluates VD on four
real-world datasets, and shows that VD indeed exists in these datasets. Third,
it examines the impact of differential privacy (DP), a defense mechanism
against MIA, on VD. The results show that although DP changes VD significantly,
it cannot eliminate VD completely. Therefore, fourth, it designs a new
mitigation algorithm named FAIRPICK to reduce VD. An extensive set of
experimental results demonstrates that FAIRPICK can effectively reduce VD both
with and without DP deployment.
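
For illustration only, one plausible way to instantiate VD (a minimal sketch; the paper's exact formalization may differ) is the gap in MIA success rates between two demographic groups:
\[
  \mathrm{VD}(G_a, G_b) \;=\; \bigl|\, \Pr[\hat{m}(x) = m(x) \mid x \in G_a] \;-\; \Pr[\hat{m}(x) = m(x) \mid x \in G_b] \,\bigr|,
\]
where $m(x)$ denotes the true membership status of record $x$ in the training set, $\hat{m}(x)$ is the membership inferred by the attacker, and $G_a$, $G_b$ are two demographic groups; this notation is an illustrative assumption rather than the paper's own definition.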