Abstract
The rapid development of large language models (LLMs) and the popularization of
cloud computing have raised growing concerns about privacy protection and data
security, which are key challenges in cross-cloud model deployment and
training. We present a new framework that addresses these issues and enables
privacy-preserving collaborative training across distributed clouds based on
federated learning. Our mechanism combines cutting-edge cryptographic
primitives, dynamic model aggregation techniques, and cross-cloud data
harmonization to improve the security, efficiency, and scalability of the
traditional federated learning paradigm. Furthermore, we propose a hybrid
aggregation scheme that mitigates the threat of data leakage and optimizes the
aggregation of model updates, yielding substantial improvements in model
effectiveness and stability. Experimental results demonstrate that the
proposed framework compares favorably with traditional federated learning
methods in training efficiency, privacy protection, and model accuracy.
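The abstract's hybrid scheme builds on the basic federated operation of aggregating per-client model updates into a global update. As a minimal illustration only (the function name, weighting rule, and list-based parameters below are assumptions, not the paper's actual method), a FedAvg-style weighted average can be sketched as:

```python
def aggregate_updates(client_updates, client_sizes):
    """Hypothetical sketch: combine per-client parameter vectors into a
    global update, weighting each client by its local dataset size
    (FedAvg-style weighting; not the paper's hybrid scheme)."""
    total = sum(client_sizes)
    dim = len(client_updates[0])
    global_update = [0.0] * dim
    for update, size in zip(client_updates, client_sizes):
        weight = size / total  # proportional contribution of this client
        for i, value in enumerate(update):
            global_update[i] += weight * value
    return global_update

# Two clients holding 1 and 3 samples respectively: the second client's
# update dominates the weighted average.
print(aggregate_updates([[1.0, 2.0], [3.0, 4.0]], [1, 3]))  # [2.5, 3.5]
```

A hybrid scheme such as the one described would additionally apply cryptographic protection (e.g., secure aggregation or masking) so that no individual client's update is revealed to the server, which plain averaging does not provide.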