Developing high-performance deep learning models is resource-intensive,
leading model owners to utilize Machine Learning as a Service (MLaaS) platforms
instead of releasing them publicly. However, malicious users may
exploit these query interfaces to mount model extraction attacks,
reconstructing the target model's functionality locally. While prior research has investigated
triggerable watermarking techniques for asserting ownership, existing methods
face significant challenges: (1) most approaches require additional training,
resulting in high overhead and limited flexibility, and (2) they often fail to
account for advanced attackers, leaving them vulnerable to adaptive attacks.
In this paper, we propose Neural Honeytrace, a robust plug-and-play
watermarking framework against model extraction attacks. We first formulate a
watermark transmission model from an information-theoretic perspective,
providing an interpretable account of the principles and limitations of
existing triggerable watermarking methods. Guided by this model, we further introduce:
(1) a similarity-based, training-free watermarking method that enables
plug-and-play, flexible watermarking, and (2) a distribution-based multi-step
watermark information transmission strategy for robust watermarking. Comprehensive
experiments on four datasets demonstrate that Neural Honeytrace outperforms
previous methods in both efficiency and resistance to adaptive attacks. Neural
Honeytrace reduces the average number of samples required for a worst-case
t-Test-based copyright claim from 193,252 to 1,857 with zero training cost. The
code is available at https://github.com/NeurHT/NeurHT.