Abstract
With recent legislation on the right to be forgotten, machine unlearning has
emerged as a crucial research area. It enables the removal of a user's data
from machine learning models trained via federated learning without retraining
from scratch. However, current machine unlearning algorithms face challenges
of efficiency and validity. To address these issues, we propose a new
framework, named Goldfish. It comprises four modules:
basic model, loss function, optimization, and extension. To address the
challenge of low validity in existing machine unlearning algorithms, we propose
a novel loss function that jointly accounts for the loss arising from the
discrepancy between predictions and actual labels on the remaining dataset, the
bias of the predicted results on the removed dataset, and the confidence level
of the predicted results. Additionally, to enhance efficiency, we adopt a
knowledge distillation technique in the basic model and introduce an
optimization module that encompasses an early termination mechanism guided by
empirical risk and a data partition mechanism. Furthermore, to bolster the
robustness of the
aggregated model, we propose an extension module that incorporates a mechanism
using an adaptive distillation temperature to address the heterogeneity of
users' local data and a mechanism using adaptive weights to handle variation
in the quality of uploaded models. Finally, we conduct comprehensive
experiments to demonstrate the effectiveness of the proposed approach.
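
Reading the abstract's description of the loss literally, one plausible form
is the weighted sum sketched below; this is an assumed reconstruction, not the
paper's actual definition. Here D_r is the remaining dataset, D_f the removed
dataset, f_theta(x) the model's predicted class distribution, ell a standard
prediction loss such as cross-entropy, d an assumed divergence between the
predictions on removed data and a reference distribution u (e.g., uniform),
and lambda_1, lambda_2 assumed trade-off weights:

    \mathcal{L}(\theta) =
        \frac{1}{|D_r|} \sum_{(x,y) \in D_r} \ell\bigl(f_\theta(x),\, y\bigr)
        + \lambda_1 \, \frac{1}{|D_f|} \sum_{x \in D_f} d\bigl(f_\theta(x),\, u\bigr)
        + \lambda_2 \, \frac{1}{|D_f|} \sum_{x \in D_f} \max_c f_\theta(x)_c

The first term preserves accuracy on the retained data, the second pushes
predictions on removed data toward an uninformative reference, and the third
penalizes confident predictions on removed data, matching the three components
the abstract names.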
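The extension module's two mechanisms can likewise be sketched in code. The
PyTorch fragment below is illustrative only: the functions adaptive_temperature
and aggregate, their heuristics (entropy-based skew for the temperature,
quality-score-weighted averaging for aggregation), and all parameter names are
assumptions about one way the abstract's description could be realized, not
the paper's actual algorithms.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature):
        # Standard knowledge-distillation loss; only the per-client adaptive
        # temperature (computed below) is specific to this sketch.
        log_p_student = F.log_softmax(student_logits / temperature, dim=1)
        p_teacher = F.softmax(teacher_logits / temperature, dim=1)
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

    def adaptive_temperature(label_histogram, t_min=1.0, t_max=4.0):
        # Hypothetical rule: the more skewed (heterogeneous) a client's local
        # label distribution, the higher the temperature, softening targets.
        p = label_histogram / label_histogram.sum()
        entropy = -(p * torch.log(p.clamp_min(1e-12))).sum()
        max_entropy = torch.log(torch.tensor(float(len(p))))
        skew = 1.0 - entropy / max_entropy  # 0 = uniform labels, 1 = single class
        return t_min + (t_max - t_min) * skew.item()

    def aggregate(client_states, quality_scores):
        # Hypothetical adaptive-weight aggregation: each uploaded model is
        # weighted by a quality score (e.g., validation accuracy), normalized.
        weights = torch.tensor(quality_scores, dtype=torch.float32)
        weights = weights / weights.sum()
        keys = client_states[0].keys()
        return {k: sum(w * s[k].float()
                       for w, s in zip(weights.tolist(), client_states))
                for k in keys}

Under these assumptions, a server would compute one temperature per client
from its reported label histogram, have clients distill with that temperature,
and then average the uploaded state dicts with the quality-derived weights.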