Abstract
As traditional centralized learning networks (CLNs) are facing increasing
challenges in terms of privacy preservation, communication overheads, and
scalability, federated learning networks (FLNs) have been proposed as a
promising alternative paradigm to support the training of machine learning (ML)
models. In contrast to the centralized data storage and processing in CLNs,
FLNs exploit a number of edge devices (EDs) to store data and perform training
in a distributed manner. In this way, the EDs in FLNs can keep training data locally,
which preserves privacy and reduces communication overheads. However, since the
model training within FLNs relies on the contribution of all EDs, the training
process can be disrupted if some of the EDs upload incorrect or falsified
training results, i.e., launch poisoning attacks. In this paper, we review the
vulnerabilities of FLNs, and particularly give an overview of poisoning attacks
and mainstream countermeasures. Nevertheless, the existing countermeasures can
only provide passive protection and fail to consider the training fees paid for
the contributions of the EDs, resulting in an unnecessarily high training cost.
Hence, we present a smart security enhancement framework for FLNs. In
particular, a verify-before-aggregate (VBA) procedure is developed to identify
and remove the non-benign training results from the EDs. Afterward, deep
reinforcement learning (DRL) is applied to learn the behavior patterns of the
EDs and to actively select the EDs that can provide benign training results and
charge low training fees. Simulation results reveal that the proposed framework
can protect FLNs effectively and efficiently.
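
The verify-before-aggregate (VBA) idea can be illustrated with a short sketch. The abstract does not specify how the server verifies an ED's upload, so the loss-based check on a small held-out validation set below (verify_update, mse_loss, and the tol threshold) is an assumed, hypothetical instantiation rather than the paper's actual procedure; only the overall filter-then-aggregate structure follows the description above.

import numpy as np

def mse_loss(w, X, y):
    # Mean squared error of a linear model with weight vector w.
    return float(np.mean((X @ w - y) ** 2))

def verify_update(update, global_w, X_val, y_val, loss_fn, tol=0.0):
    # Hypothetical check: accept an ED's update only if applying it does not
    # increase the loss on the held-out validation set by more than tol.
    return loss_fn(global_w + update, X_val, y_val) <= loss_fn(global_w, X_val, y_val) + tol

def vba_round(global_w, updates, X_val, y_val, loss_fn=mse_loss):
    # One verify-before-aggregate round: filter out non-benign updates,
    # then average the remaining ones (FedAvg-style) into the global model.
    benign = [u for u in updates if verify_update(u, global_w, X_val, y_val, loss_fn)]
    if not benign:          # nothing passed verification; keep the old model
        return global_w
    return global_w + np.mean(benign, axis=0)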
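The DRL-based selection of benign, low-fee EDs can be sketched in a similar spirit. A simple epsilon-greedy value learner stands in for the paper's DRL agent, and the reward shaping (verification outcome minus a fee penalty weighted by fee_weight) is an assumption for illustration, not the formulation used in the paper.

import numpy as np

rng = np.random.default_rng(0)

class EDSelector:
    # Learns per-ED value estimates and prefers EDs that tend to return
    # benign results at low fees.  Epsilon-greedy stands in for DRL here.
    def __init__(self, num_eds, k, fee_weight=0.5, eps=0.1, lr=0.1):
        self.values = np.zeros(num_eds)   # estimated per-ED reward
        self.k = k                        # number of EDs selected per round
        self.fee_weight = fee_weight      # assumed cost/reliability trade-off
        self.eps = eps
        self.lr = lr

    def select(self):
        if rng.random() < self.eps:                       # explore
            return rng.choice(len(self.values), self.k, replace=False)
        return np.argsort(self.values)[-self.k:]          # exploit top-k EDs

    def update(self, chosen, benign_flags, fees):
        # Reward: +1 if the ED's upload passed VBA verification, minus a
        # weighted training fee; move the value estimate toward it.
        rewards = np.asarray(benign_flags, float) - self.fee_weight * np.asarray(fees)
        self.values[chosen] += self.lr * (rewards - self.values[chosen])

In each training round, the selector would pick k EDs, the VBA step would flag which of their uploads are benign, and the observed flags and fees would be fed back through update, so the selector gradually avoids EDs that poison the model or charge high fees.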