Abstract
The right to be forgotten requires the removal or "unlearning" of a user's
data from machine learning models. However, in the context of Machine Learning
as a Service (MLaaS), retraining a model from scratch to fulfill an unlearning
request is impractical because the service provider (the server) does not hold
the training data. Moreover, approximate unlearning entails a complex trade-off
between utility (model performance) and privacy (unlearning performance). In
this paper, we explore the potential threats posed by unlearning services in
MLaaS, specifically over-unlearning, where more information is unlearned than
expected. We propose two strategies that leverage over-unlearning to measure
its impact on this trade-off, under a black-box access setting in which
existing machine unlearning attacks are not applicable. The effectiveness of
these strategies is evaluated
through extensive experiments on benchmark datasets, across various model
architectures and representative unlearning approaches. Results indicate that
both strategies can substantially undermine model efficacy in unlearning
scenarios. This study uncovers an underexplored gap between unlearning and
contemporary MLaaS, highlighting the need for careful consideration in
balancing data unlearning, model utility, and security.
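
For intuition only (this sketch is not the paper's attack or evaluation protocol), over-unlearning can be read off the utility side of the trade-off: after an unlearning request, utility drops further than removing the forget set alone would justify. The minimal sketch below uses an exact retrain on the retained data as the reference point; all names (the forget/retain split, the synthetic dataset, the choice of classifier) are illustrative, and any approximate-unlearning routine could be compared against the same reference.

```python
# A minimal sketch (illustrative, not the paper's method) of quantifying
# over-unlearning: compare test utility before unlearning against an exact
# retrain-from-scratch on the retain set, which serves as the reference.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Mark a "forget" set (the unlearning request); the rest is the retain set
# whose utility should ideally be preserved by a well-behaved unlearner.
forget = np.zeros(len(X_train), dtype=bool)
forget[:200] = True

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc_before = model.score(X_test, y_test)

# Exact unlearning baseline: retrain from scratch on the retain set only.
retrained = LogisticRegression(max_iter=1000).fit(
    X_train[~forget], y_train[~forget]
)
acc_reference = retrained.score(X_test, y_test)

# An approximate unlearner whose test utility falls well below
# acc_reference is removing more information than the request warrants,
# i.e., over-unlearning.
print(f"utility before unlearning:   {acc_before:.3f}")
print(f"exact-retrain reference:     {acc_reference:.3f}")
print(f"gap attributable to removal: {acc_before - acc_reference:+.3f}")
```

In practice the exact retrain is unavailable to an MLaaS server (it lacks the training data), which is precisely why approximate unlearning is used and why excess utility loss is hard to detect; the retrain here only serves as an offline yardstick for the over-unlearning notion.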