Abstract
As on-device LLMs (e.g., Apple on-device Intelligence) are widely adopted to
reduce network dependency, improve privacy, and enhance responsiveness,
verifying the legitimacy of models running on local devices becomes critical.
Existing attestation techniques are not suitable for billion-parameter Large
Language Models (LLMs): they struggle to remain both time- and memory-efficient
while addressing the emerging threats of the LLM era. In this paper, we present
AttestLLM, the first-of-its-kind attestation framework to protect the
hardware-level intellectual property (IP) of device vendors by ensuring that
only authorized LLMs can execute on target platforms. AttestLLM leverages an
algorithm/software/hardware co-design approach to embed robust watermarking
signatures onto the activation distributions of LLM building blocks. It also
optimizes the attestation protocol within the Trusted Execution Environment
(TEE), providing efficient verification without compromising inference
throughput. Extensive proof-of-concept evaluations on LLMs from the Llama,
Qwen, and Phi families for on-device use cases demonstrate AttestLLM's
attestation reliability, fidelity, and efficiency. Furthermore, AttestLLM enforces model
legitimacy and exhibits resilience against model replacement and forgery
attacks.
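
To make the watermark-based attestation idea concrete, here is a minimal, hypothetical sketch of how a verifier might decode a signature from the activation distribution of a model block and compare it against a registered one. This is not the paper's implementation: the signature database `SIG_DB`, the `decode_watermark` scheme (sign of a secret-key projection of per-channel activation means), the probe batch, and the toy block are all illustrative assumptions.

```python
import numpy as np

# Hypothetical signature database: block index -> expected watermark bits.
# In a real deployment this would live in the TEE's secure storage.
SIG_DB = {}

def decode_watermark(activations, key_matrix):
    """Project per-channel activation means onto a secret key and threshold
    at zero to recover an embedded bit string (one plausible decoder)."""
    channel_means = activations.mean(axis=0)
    return (key_matrix @ channel_means > 0).astype(int)

def attest_block(block_fn, block_idx, probe_inputs, key_matrix, max_errors=0):
    """Run fixed probe inputs through one block and check that the decoded
    watermark matches the registered signature for that block."""
    acts = block_fn(probe_inputs)
    bits = decode_watermark(acts, key_matrix)
    errors = int(np.sum(bits != SIG_DB[block_idx]))
    return errors <= max_errors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    key = rng.standard_normal((4, 8))       # secret decoding key (4-bit sig)
    W = rng.standard_normal((8, 8))         # toy stand-in for one LLM block
    block = lambda x: np.maximum(x @ W, 0.0)
    probes = rng.standard_normal((16, 8))   # fixed probe batch
    # Register the legitimate block's signature, then verify it.
    SIG_DB[0] = decode_watermark(block(probes), key)
    print("block 0 attested:", attest_block(block, 0, probes, key))
```

In this sketch, a replaced or forged block would change the activation statistics, flip decoded bits beyond `max_errors`, and fail attestation; the per-block granularity mirrors the abstract's framing of watermarking "LLM building blocks" rather than the whole model.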