Ensuring the security of released large language models (LLMs) poses a
significant dilemma: existing mechanisms either compromise ownership rights
or raise data privacy concerns. To address this dilemma, we introduce
TaylorMLP, a method that protects the ownership of released LLMs and
prevents their abuse.
Specifically, TaylorMLP preserves the ownership of LLMs by transforming
their weights into the parameters of a Taylor series. Instead of releasing
the original weights, developers share these Taylor-series parameters with
users, thereby ensuring the security of the LLMs.
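As an illustrative sketch (written for a scalar argument; the expansion point \(x_0\) and truncation order \(K\) are notational assumptions, not details fixed by this abstract), a weight-parameterized mapping \(f_W\) can be shared through its truncated Taylor expansion rather than through \(W\) itself:

\[
f_W(x) \;\approx\; \sum_{k=0}^{K} \frac{f_W^{(k)}(x_0)}{k!}\,(x - x_0)^{k},
\]

so that users receive only the coefficients \(f_W^{(k)}(x_0)/k!\), and the truncation order \(K\) trades output fidelity against per-token compute.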
Moreover, TaylorMLP can prevent abuse of LLMs by adjusting their generation
speed: it induces low-speed token generation for the protected LLMs by
increasing the number of terms retained in the Taylor series. This
intentional delay helps LLM developers prevent potential large-scale
unauthorized use of their models.
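As a minimal sketch of this latency lever (a generic truncated-series evaluation, not the paper's exact construction; eval_taylor and the exp(x) toy target are illustrative assumptions), per-call cost grows with the number of retained terms:

    import math
    import time

    import numpy as np

    def eval_taylor(x: np.ndarray, coeffs: list[float], x0: float) -> np.ndarray:
        """Evaluate a truncated Taylor series elementwise, term by term.

        Each retained term costs one multiply-add pass over x, so per-call
        latency grows roughly linearly with the number of terms.
        """
        out = np.zeros_like(x)
        power = np.ones_like(x)  # (x - x0)^k, built up incrementally
        for c in coeffs:
            out += c * power
            power *= (x - x0)
        return out

    # Toy target: exp(x) around x0 = 0, whose k-th coefficient is 1/k!.
    x = np.random.randn(1_000_000)
    for K in (4, 16, 64):
        coeffs = [1.0 / math.factorial(k) for k in range(K + 1)]
        start = time.perf_counter()
        y = eval_taylor(x, coeffs, 0.0)
        elapsed = time.perf_counter() - start
        print(f"K={K:3d}  max_err={np.abs(y - np.exp(x)).max():.2e}  time={elapsed * 1e3:.1f} ms")

Retaining more terms brings the output closer to the true function at proportionally higher cost per call, which is the trade-off exploited to throttle token generation.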
Empirical experiments across five datasets and three LLM architectures
demonstrate that TaylorMLP induces a more than 4x increase in latency while
producing tokens that exactly match those of the original LLMs.
Subsequent defensive experiments further confirm that TaylorMLP effectively
prevents users from reconstructing the weight values from downstream
datasets.