Abstract
Machine learning models deployed as a service (MLaaS) are susceptible to
model stealing attacks, where an adversary attempts to steal the model within a
restricted access framework. While existing attacks demonstrate near-perfect
clone-model performance using softmax predictions of the classification
network, most APIs allow access only to top-1 labels. In this work, we show
that it is indeed possible to steal Machine Learning models using only top-1
predictions (Hard Label setting), without access to model gradients (Black-Box
setting) or even the training dataset (Data-Free setting), and within a low
query budget. We propose a novel GAN-based framework that
trains the student and generator in tandem to steal the model effectively while
overcoming the challenge of the hard label setting by utilizing gradients of
the clone network as a proxy for the victim's gradients. We overcome the large
query cost associated with a typical Data-Free setting by utilizing publicly
available (potentially unrelated) datasets as a weak image prior. We
additionally show that even in the absence of such data, it is possible to
achieve state-of-the-art results within a low query budget using synthetically
crafted samples. Finally, we are the first to demonstrate that Model Stealing
in a restricted-access setting scales to a 100-class dataset.
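For intuition, the following is a minimal PyTorch-style sketch of the tandem
clone/generator loop described above, under loud assumptions: the victim is
queried only for its top-1 label, and the generator is updated by
backpropagating through the clone as a proxy for the inaccessible victim
gradients. The Generator architecture, the optimizers, the victim_top1 helper,
and the entropy-maximization generator objective are all illustrative choices,
not necessarily the paper's exact losses or components.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Generator(nn.Module):
        """Maps latent noise to 32x32 RGB images (hypothetical architecture)."""
        def __init__(self, z_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(z_dim, 256 * 4 * 4), nn.ReLU(),
                nn.Unflatten(1, (256, 4, 4)),
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),   # 4x4 -> 8x8
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),    # 8x8 -> 16x16
                nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),      # 16x16 -> 32x32
            )
        def forward(self, z):
            return self.net(z)

    @torch.no_grad()
    def victim_top1(victim, x):
        """Query the victim API: only the argmax (top-1) label is returned."""
        return victim(x).argmax(dim=1)

    def train_step(victim, clone, gen, opt_c, opt_g,
                   z_dim=128, batch=64, device="cpu"):
        # Clone update: fit the victim's hard labels on generated samples.
        z = torch.randn(batch, z_dim, device=device)
        x = gen(z).detach()                 # no generator gradient here
        y = victim_top1(victim, x)          # one hard-label query per sample
        loss_c = F.cross_entropy(clone(x), y)
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()

        # Generator update: gradients flow through the CLONE only, acting as
        # a proxy for the victim's gradients. This illustrative objective
        # maximizes the clone's predictive entropy, steering generation
        # toward samples the clone is least certain about.
        z = torch.randn(batch, z_dim, device=device)
        probs = F.softmax(clone(gen(z)), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        loss_g = -entropy
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_c.item(), loss_g.item()

Because loss_g backpropagates only through the clone, the loop never touches
the victim's gradients, which is what makes the Hard Label and Black-Box
constraints jointly feasible in this sketch.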