arXiv
Unveiling the Vulnerability of Private Fine-Tuning in Split-Based Frameworks for Large Language Models: A Bidirectionally Enhanced Attack
Abstract
Recent advancements in pre-trained large language models (LLMs) have
significantly influenced various domains. Adapting these models for specific
tasks often involves fine-tuning (FT) with private, domain-specific data.
However, privacy concerns keep this data undisclosed, and the computational
demands for deploying LLMs pose challenges for resource-limited data holders.
This has sparked interest in split learning (SL), a Model-as-a-Service (MaaS)
paradigm that divides LLMs into smaller segments for distributed training and
deployment, transmitting only intermediate activations instead of raw data. SL
has garnered substantial interest in both industry and academia as it aims to
balance user data privacy, model ownership, and resource challenges in the
private fine-tuning of LLMs. Despite these privacy claims, this paper reveals
significant vulnerabilities arising from the combination of SL and LLM-FT,
rooted in two properties: the Not-too-far property of fine-tuning (fine-tuned
weights remain close to their pre-trained values) and the auto-regressive
nature of LLMs.
Exploiting these vulnerabilities, we propose Bidirectional Semi-white-box
Reconstruction (BiSR), the first data reconstruction attack (DRA) designed to
target both the forward and backward propagation processes of SL. BiSR utilizes
pre-trained weights as prior knowledge, combining a learning-based attack with
a bidirectional optimization-based approach for highly effective data
reconstruction. Additionally, it incorporates a Noise-adaptive Mixture of
Experts (NaMoE) model to enhance reconstruction performance under perturbation.
We conducted systematic experiments on various mainstream LLMs and different
setups, empirically demonstrating BiSR's state-of-the-art performance.
Furthermore, we thoroughly examined three representative defense mechanisms,
showcasing our method's capability to reconstruct private data even in the
presence of these defenses.
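The leakage exploited by such attacks can be illustrated with a minimal sketch. This is not the paper's BiSR implementation; it is a hypothetical one-layer "bottom" segment whose pre-trained weights and embedding table are assumed known to the attacker (the semi-white-box setting). The attacker observes only the transmitted activations, inverts them by gradient descent on a continuous embedding, then snaps the result to the nearest row of the public embedding table. All names, dimensions, and the model architecture here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, k = 50, 16, 64            # vocab size, embedding dim, activation dim

# Semi-white-box prior: the attacker knows the pre-trained embedding
# table E and the client-side ("bottom") segment weights W.
E = rng.normal(size=(V, d))
W = rng.normal(size=(d, k)) / np.sqrt(d)

def bottom(x):
    """Client-side model segment: a single linear layer + ReLU."""
    return np.maximum(x @ W, 0.0)

# Client embeds a private token sequence and transmits only activations.
secret = rng.integers(0, V, size=8)
h = bottom(E[secret])

# Attacker: gradient descent on continuous embeddings x so that
# bottom(x) matches the observed activations h.
x = 0.1 * rng.normal(size=(len(secret), d))
for _ in range(2000):
    z = x @ W
    grad = ((np.maximum(z, 0.0) - h) * (z > 0)) @ W.T
    x -= 0.1 * grad

# Snap each recovered embedding to its nearest row of the public table.
recovered = ((x[:, None, :] - E[None, :, :]) ** 2).sum(-1).argmin(axis=1)
print("recovered:", recovered, "secret:", secret)
```

The sketch shows why pre-trained weights act as a strong prior: once the attacker knows the embedding table, reconstructing a continuous embedding is enough to identify the discrete token, since embeddings of distinct tokens are well separated.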