Abstract
Federated Learning (FL) is a machine learning (ML) approach that enables
multiple decentralized devices or edge servers to collaboratively train a
shared model without exchanging raw data. While model updates are trained and
exchanged between clients and the server, both the data and the models are
susceptible to various data-poisoning attacks.
In this study, we explore the severity of data-poisoning attacks in the
computer-network domain, because such attacks are easy to implement but
difficult to detect. We considered two types of data-poisoning attacks, label
flipping (LF) and feature poisoning (FP), and applied them with a novel
approach. In LF, we randomly flipped the labels of benign samples and trained
the model on the manipulated data. In FP, we randomly manipulated the most
influential features, identified using the Random Forest algorithm.
The datasets used in this experiment were the CIC and UNSW computer-network
datasets. We generated adversarial samples using the two attacks described
above, applied to a small percentage of each dataset, and then trained the
model and measured its accuracy on the adversarial datasets. We recorded the
results for both the benign and the manipulated datasets and observed
significant differences in model accuracy between them. The experimental
results show that the LF attack failed, whereas the FP attack was effective at
fooling the server. With a 1% LF attack on the CIC dataset, the accuracy was
approximately 0.0428 and the attack success rate (ASR) was 0.9564; hence, the
attack is easily detectable. With a 1% FP attack, the accuracy and ASR were
both approximately 0.9600, making FP attacks difficult to detect. We repeated
the experiment with different poisoning percentages.
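The abstract does not include code, but the two attacks it describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a synthetic dataset in place of CIC/UNSW, assumes binary labels, and the function names, the 1% poisoning fraction, and the choice of top-3 features are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the CIC/UNSW network datasets (illustrative only).
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=5, random_state=0)

def label_flip(y, frac=0.01, rng=rng):
    """Label-flipping (LF): flip the labels of a random fraction of samples.
    Assumes binary 0/1 labels."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

def feature_poison(X, y, frac=0.01, n_top=3, rng=rng):
    """Feature poisoning (FP): randomly perturb the most influential
    features, ranked by Random Forest feature importance, for a random
    fraction of samples."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    top = np.argsort(rf.feature_importances_)[-n_top:]  # highest-importance features
    X_poisoned = X.copy()
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    for f in top:
        # Replace each poisoned sample's value with a random draw from
        # the feature's observed range.
        lo, hi = X[:, f].min(), X[:, f].max()
        X_poisoned[idx, f] = rng.uniform(lo, hi, size=len(idx))
    return X_poisoned

y_lf = label_flip(y)       # 1% of labels flipped
X_fp = feature_poison(X, y)  # 1% of samples perturbed in the top-3 features
```

A defender comparing clean-data accuracy against a model trained on `y_lf` versus one trained on `X_fp` would, per the reported results, see a sharp accuracy drop for LF but near-baseline accuracy for FP, which is what makes FP harder to detect.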