Abstract
Natural Language Processing (NLP) operations, such as semantic sentiment
analysis and text synthesis, often raise privacy concerns and demand
significant on-device computational resources. Centralized learning (CL) on the
edge provides an energy-efficient alternative but requires collecting raw data,
compromising user privacy. While federated learning (FL) enhances privacy, it
imposes high computational energy demands on resource-constrained devices. This
study provides insights into deploying privacy-preserving, energy-efficient NLP
models on edge devices. We introduce semantic split learning (SL) as an
energy-efficient, privacy-preserving tiny machine learning (TinyML) framework
and compare it to FL and CL in the presence of Rayleigh fading and additive
noise. Our results show that SL significantly reduces computational power and
CO2 emissions while enhancing privacy: its input-reconstruction error is
roughly four times that of FL and nearly eighteen times that of CL. FL, in
contrast, offers a balanced trade-off between privacy and efficiency. Our
code is available for replication at our GitHub repository:
https://github.com/AhmedRadwan02/TinyEco2AI-NLP.
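To make the split-learning setup concrete, the following is a minimal sketch of the idea the abstract describes: the client device runs only the early layers of the model and transmits the low-dimensional "smashed" activation at the cut layer over a noisy channel, while the server completes the forward pass. All layer sizes, the network architecture, and the channel parameters here are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only (not from the paper).
d_in, d_cut, d_out = 16, 4, 2

# Client holds the layers up to the cut; the server holds the rest.
W_client = rng.normal(size=(d_in, d_cut))
W_server = rng.normal(size=(d_cut, d_out))

def client_forward(x):
    # Only the activation at the cut layer ever leaves the device,
    # never the raw input x -- the privacy argument for split learning.
    return np.maximum(x @ W_client, 0.0)  # ReLU at the cut layer

def channel(z, snr_db=10.0):
    # Toy stand-in for the Rayleigh fading + additive noise mentioned
    # in the abstract: random fading gain plus Gaussian noise at a
    # chosen SNR (10 dB is an arbitrary example value).
    h = rng.rayleigh(scale=1.0, size=z.shape)
    noise_power = np.mean(z ** 2) / (10 ** (snr_db / 10))
    return h * z + rng.normal(scale=np.sqrt(noise_power), size=z.shape)

def server_forward(z):
    # Server completes the forward pass from the received activation.
    return z @ W_server

x = rng.normal(size=(1, d_in))      # raw features stay on device
z = channel(client_forward(x))      # only smashed activations transmitted
y = server_forward(z)
print(y.shape)
```

In this sketch the device never transmits `x`, only `z`, whose dimensionality (`d_cut`) is much smaller than the input; the difficulty of inverting `z` back to `x` is what the reconstruction-error comparison in the abstract quantifies.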