Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which
allow a data-limited adversary with no knowledge of the training dataset to
clone the functionality of a target model through black-box query access
alone. Such attacks are typically carried out by querying the target model
using inputs that are synthetically generated or sampled from a surrogate
dataset to construct a labeled dataset. The adversary can use this labeled
dataset to train a clone model, which achieves a classification accuracy
comparable to that of the target model. We propose "Adaptive Misinformation" to
defend against such model stealing attacks. We identify that all existing model
stealing attacks invariably query the target model with Out-Of-Distribution
(OOD) inputs. By selectively sending incorrect predictions for OOD queries, our
defense substantially degrades the accuracy of the attacker's clone model (by
up to 40%), while causing minimal accuracy loss (<0.5%) for benign users.
Compared to existing defenses, ours achieves a significantly better security
vs. accuracy trade-off and incurs minimal computational overhead.
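To make the core idea concrete, the sketch below illustrates selectively answering suspected OOD queries with incorrect predictions. It assumes, purely for illustration, a max-softmax-confidence threshold as the OOD detector and a separate misinfo_model that produces the incorrect predictions; the detector, the misinformation mechanism, and the threshold tau are assumptions of this sketch and may differ from the actual defense.

```python
# Minimal sketch of the "adaptive misinformation" idea, under assumptions:
# - OOD detection via a max-softmax-probability threshold (tau)
# - incorrect predictions supplied by a hypothetical misinfo_model
import torch
import torch.nn.functional as F

def serve_prediction(x, target_model, misinfo_model, tau=0.5):
    """Return normal predictions for in-distribution queries and
    misinformation for queries flagged as out-of-distribution."""
    with torch.no_grad():
        probs = F.softmax(target_model(x), dim=-1)        # target model's prediction
        confidence = probs.max(dim=-1).values             # max softmax probability per query
        misinfo = F.softmax(misinfo_model(x), dim=-1)     # deliberately incorrect prediction
        ood_mask = (confidence < tau).unsqueeze(-1)       # low confidence -> treat as OOD
        # In-distribution (benign) queries receive the true prediction;
        # suspected OOD queries receive misinformation instead.
        return torch.where(ood_mask, misinfo, probs)
```

In this sketch, benign queries (which the model answers confidently) pass through unchanged, while queries far from the training distribution are answered with misleading outputs, which is what poisons the labeled dataset an attacker constructs from the responses.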