Abstract
In recent years, Large Language Models (LLMs) have demonstrated remarkable
abilities in various natural language processing tasks. However, adapting these
models to specialized domains using private datasets stored on
resource-constrained edge devices, such as smartphones and personal computers,
remains challenging due to significant privacy concerns and limited
computational resources. Existing model adaptation methods either compromise
data privacy by requiring data transmission or jeopardize model privacy by
exposing proprietary LLM parameters. To address these challenges, we propose
Prada, a novel privacy-preserving and efficient black-box LLM adaptation system
using private on-device datasets. Prada employs a lightweight proxy model
fine-tuned with Low-Rank Adaptation (LoRA) locally on user devices. During
inference, Prada leverages the logits offset, i.e., the difference in output
logits between the base and the adapted proxy models, to iteratively refine the outputs of a
remote black-box LLM. This offset-based adaptation approach preserves both data
privacy and model privacy, as there is no need to share sensitive data or
proprietary model parameters. Furthermore, we incorporate speculative decoding
to further speed up the inference process of Prada, making the system
practically deployable on bandwidth-constrained edge devices, enabling a more
practical deployment of Prada. Extensive experiments on various downstream
tasks demonstrate that Prada achieves performance comparable to centralized
fine-tuning methods while significantly reducing computational overhead by up
to 60% and communication costs by up to 80%.
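The offset arithmetic at the heart of this approach fits in a few lines. Below is a minimal PyTorch sketch of one plausible reading of the abstract, assuming the proxy and the remote LLM share a tokenizer and vocabulary so their logits are comparable; the function name and the scaling knob `alpha` are illustrative, not taken from the paper.

```python
import torch

def offset_adjusted_logits(llm_logits: torch.Tensor,
                           base_proxy_logits: torch.Tensor,
                           adapted_proxy_logits: torch.Tensor,
                           alpha: float = 1.0) -> torch.Tensor:
    """Shift the remote LLM's next-token logits by the proxy offset.

    The offset (adapted - base) captures what LoRA fine-tuning changed in
    the local proxy; adding it to the black-box LLM's logits transfers that
    adaptation without sharing data or model weights. ``alpha`` is a
    hypothetical scaling knob, not something the paper specifies.
    """
    offset = adapted_proxy_logits - base_proxy_logits
    return llm_logits + alpha * offset

# One greedy decoding step under this scheme (logits shaped [batch, vocab]):
# next_id = offset_adjusted_logits(l_llm, l_base, l_adapted).argmax(dim=-1)
```

Because only logits cross the network boundary in either direction, neither the private dataset nor the proprietary LLM parameters ever leave their respective owners.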
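Speculative decoding, as named in the abstract, pairs a cheap local drafter with batched remote verification. The greedy-acceptance sketch below illustrates the general technique, not Prada's actual interface: `draft_model`, `verify_logits_fn`, and `k` are assumed names, and in Prada the drafter would plausibly be the adapted proxy while the verifier is the offset-adjusted remote LLM.

```python
import torch

@torch.no_grad()
def speculative_step(draft_model, verify_logits_fn, input_ids, k=4):
    """One speculative-decoding step: draft k tokens locally, verify remotely.

    ``draft_model`` is a callable returning logits of shape
    [batch, seq, vocab]; ``verify_logits_fn`` makes a single batched call
    to the (offset-adjusted) remote LLM over the drafted sequence. Greedy
    acceptance: keep drafted tokens until one disagrees with the
    verifier's argmax, then substitute the verifier's token.
    """
    n = input_ids.size(-1)
    ids, drafted = input_ids, []
    for _ in range(k):                        # cheap local drafting
        logits = draft_model(ids)[:, -1, :]
        tok = logits.argmax(dim=-1, keepdim=True)
        drafted.append(tok)
        ids = torch.cat([ids, tok], dim=-1)
    verify = verify_logits_fn(ids)            # one remote round trip
    out = input_ids
    for i, tok in enumerate(drafted):
        # Logits at position n+i-1 predict the token at position n+i.
        target = verify[:, n + i - 1, :].argmax(dim=-1, keepdim=True)
        out = torch.cat([out, target], dim=-1)
        if not torch.equal(tok, target):      # first mismatch: stop here
            break
    return out
```

Since the remote call is batched over all k drafted positions, several accepted tokens can share one round trip, which is plausibly where the communication savings the abstract cites come from.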