Abstract
Fine-tuning large language models (LLMs) raises privacy concerns due to the
risk of exposing sensitive training data. Federated learning (FL) mitigates
this risk by keeping training samples on local devices, yet privacy-preserving federated fine-tuning still faces the following problems. (i) Recent studies show that adversaries can still infer private information in FL. (ii) LLM parameters are shared publicly during federated fine-tuning, yet developers are often reluctant to disclose them, which poses further security challenges. (iii) Existing works focus on secure inference of LLMs but do not consider privacy-preserving fine-tuning. Motivated by these problems,
we propose PriFFT, a privacy-preserving federated fine-tuning mechanism, to
protect both the model parameters and users' privacy. Because LLMs contain a large number of parameters, we present a hybrid secret-sharing scheme that combines arithmetic secret sharing (ASS) and function secret sharing (FSS) to build secure operations and implement secure layers and activation functions for privacy-preserving fine-tuning. To
improve the efficiency of privacy-preserving federated fine-tuning of LLMs, we
optimize several secure computation protocols based on FSS, including
reciprocal calculation, tensor products, natural exponentiation, softmax,
sigmoid, hyperbolic tangent, and dropout. The hybrid secret-sharing scheme enables PriFFT to apply our optimized FSS protocols while combining them with ASS protocols to support complex computations without extra communication.
The optimized protocols reduce execution time by up to 62.5% and communication overhead by up to 70.7% compared to existing protocols. Moreover, PriFFT reduces the execution time and communication overhead of privacy-preserving fine-tuning by up to 59.1% and 77.0%, respectively, with no drop in accuracy compared to existing secret-sharing methods.
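
To make the building blocks concrete, the following is a minimal sketch of two-party additive arithmetic secret sharing (ASS) over the ring Z_{2^64}, the kind of primitive PriFFT combines with FSS. The function names, the ring size, and the API here are illustrative assumptions for exposition, not PriFFT's actual implementation.

import secrets

RING = 2 ** 64  # illustrative assumption: shares live in the ring Z_{2^64}

def share(x: int) -> tuple[int, int]:
    # Split a secret x into two additive shares with x = (s0 + s1) mod RING;
    # each share alone is uniformly random and reveals nothing about x.
    s0 = secrets.randbelow(RING)
    s1 = (x - s0) % RING
    return s0, s1

def reconstruct(s0: int, s1: int) -> int:
    # Recombine both shares to recover the secret.
    return (s0 + s1) % RING

def add_shares(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    # Addition of shared values is local: each party adds its own shares,
    # so no communication between the parties is needed.
    return ((a[0] + b[0]) % RING, (a[1] + b[1]) % RING)

# Usage: secret-share two values, add them under sharing, and reconstruct.
x0, x1 = share(42)
y0, y1 = share(100)
z0, z1 = add_shares((x0, x1), (y0, y1))
assert reconstruct(z0, z1) == 142

Linear operations stay local under ASS, whereas multiplications and the nonlinear functions listed in the abstract (softmax, sigmoid, hyperbolic tangent, and so on) require interaction between the parties; this is where FSS-based protocols such as the ones PriFFT optimizes come into play.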