Abstract
Transfer learning has been widely studied and has gained increasing popularity as a way to
improve the accuracy of machine learning models by transferring knowledge
acquired from training on a different task. However, no prior work has pointed out that
transfer learning can strengthen privacy attacks on machine learning models. In
this paper, we propose TransMIA (Transfer learning-based Membership Inference
Attacks), which use transfer learning to perform membership inference attacks
on the source model when the adversary is able to access the parameters of the
transferred model. In particular, we propose a transfer shadow training
technique, in which the adversary employs the parameters of the transferred model
to construct shadow models, thereby significantly improving the performance of
membership inference when only a limited amount of shadow training data is available
to the adversary. We evaluate our attacks using two real-world datasets, and show
that our attacks outperform the state-of-the-art that does not use our transfer
shadow training technique. We also compare four combinations of the
learning-based/entropy-based approach and the fine-tuning/freezing approach,
all of which employ our transfer shadow training technique. We then examine the
performance of these four approaches based on the distributions of confidence
values, and discuss possible countermeasures against our attacks.
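To make the entropy-based approach mentioned above concrete, the sketch below shows a common, simplified form of entropy-based membership inference: the attacker computes the Shannon entropy of the target model's confidence vector on a record and flags low-entropy (highly confident) predictions as likely training members. The function names and the threshold value are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def prediction_entropy(confidences):
    """Shannon entropy of a model's confidence (softmax) vector.

    Low entropy means the model is very confident, which often
    correlates with the record having been seen during training.
    """
    eps = 1e-12  # avoid log(0) for confidences that are exactly zero
    p = np.clip(np.asarray(confidences, dtype=float), eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def entropy_based_membership_inference(confidences, threshold=0.5):
    """Predict membership: True if entropy is below the threshold.

    The threshold would in practice be calibrated on shadow models;
    0.5 here is an arbitrary illustrative value.
    """
    return prediction_entropy(confidences) < threshold

# A confident prediction (member-like) vs. an uncertain one (non-member-like)
member_like = [0.98, 0.01, 0.01]
nonmember_like = [0.4, 0.3, 0.3]
```

In a full attack, the adversary would calibrate the threshold using outputs of shadow models trained to mimic the target; the transfer shadow training technique described in the abstract initializes those shadow models from the transferred model's parameters rather than from scratch.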