Abstract
This paper focuses on the challenge of machine unlearning, which aims to remove
the influence of specific training data from machine learning models.
Traditionally, the development of unlearning algorithms has run in parallel with
that of membership inference attacks (MIA), a class of privacy attacks that
determine whether a given data instance was used to train a model. However, the two strands are
intimately connected: one can view machine unlearning through the lens of MIA
success with respect to removed data. Recognizing this connection, we propose a
game-theoretic framework that integrates MIAs into the design of unlearning
algorithms. Specifically, we model the unlearning problem as a Stackelberg game
in which an unlearner strives to unlearn specific training data from a model,
while an auditor employs MIAs to detect the traces of the ostensibly removed
data. Adopting this adversarial perspective allows new advances in MIAs to be
leveraged directly, facilitating the design of unlearning algorithms. Our
framework stands out in two ways. First, it takes an adversarial approach and
proactively incorporates the attacks into the design of unlearning algorithms.
Second, it uses implicit differentiation to obtain gradients that limit
the attacker's success, which in turn drive the unlearning updates (see the sketch below). We present
empirical results to demonstrate the effectiveness of the proposed approach for
machine unlearning.
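
To make the two components above concrete, the following is a minimal sketch of how a Stackelberg unlearning objective and its implicit-differentiation gradient might be written. The notation is an illustrative assumption, not taken from the paper: \theta denotes the unlearner's model parameters, \phi the auditor's attack parameters, L_{\mathrm{ul}} a standard unlearning loss, A(\theta, \phi) the auditor's attack-success objective, and \lambda a trade-off weight.

\begin{align}
  \min_{\theta}\; & L_{\mathrm{ul}}(\theta) + \lambda\, A\bigl(\theta, \phi^{*}(\theta)\bigr)
    && \text{(leader: unlearner)} \\
  \text{s.t.}\; & \phi^{*}(\theta) \in \arg\max_{\phi} A(\theta, \phi)
    && \text{(follower: MIA auditor)}
\end{align}

Differentiating the follower's stationarity condition \nabla_{\phi} A(\theta, \phi^{*}(\theta)) = 0 with respect to \theta yields the implicit gradient of the best response,

\[
  \frac{d\phi^{*}}{d\theta}
    = -\bigl(\nabla^{2}_{\phi\phi} A\bigr)^{-1} \nabla^{2}_{\phi\theta} A ,
\]

which the leader can plug into the chain rule \frac{d}{d\theta} A(\theta, \phi^{*}(\theta)) = \nabla_{\theta} A + \bigl(\frac{d\phi^{*}}{d\theta}\bigr)^{\top} \nabla_{\phi} A to descend through the auditor's response. Note that if \phi^{*} were an exact maximizer of A itself, the second term would vanish by the envelope theorem; the implicit term matters when the auditor's response is only approximate or enters the leader's loss through a different function.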