Abstract
Smart healthcare systems are gaining popularity with the rapid development of
intelligent sensors, the Internet of Things (IoT) applications and services,
and wireless communications. However, several vulnerabilities
and adversarial attacks make it challenging to keep smart healthcare
systems safe and secure. Machine learning has been widely used to develop
models that predict and mitigate attacks. Still, adversarial attacks can
trick machine learning models into misclassifying their inputs, leading to
incorrect decisions such as false disease detection and wrong treatment
plans for patients. In this paper, we address the types of adversarial
attacks and their impact on
smart healthcare systems. We propose a model to examine how adversarial attacks
impact machine learning classifiers. To test the model, we use a medical image
dataset. Our model can classify medical images with high accuracy. We then
attack the model with the Fast Gradient Sign Method (FGSM) to cause it to
misclassify the images. Using transfer learning, we train a VGG-19 model on
the medical dataset and then apply the FGSM attack to the resulting
Convolutional Neural Network (CNN) to examine the significant impact it has
on the performance and accuracy of the machine learning model.
Our results demonstrate that the adversarial attack causes the model to
misclassify the images, dropping its accuracy from 88% to 11%.
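
For illustration, here is a minimal sketch of the attack pipeline described
above. The abstract does not specify a framework, so PyTorch, the two-class
head, and the epsilon value below are assumptions, not the authors' code:

```python
# Sketch: VGG-19 transfer learning plus an FGSM perturbation
# x_adv = x + epsilon * sign(grad_x loss(model(x), y)).
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Transfer learning: reuse ImageNet features, retrain only the classifier head.
model = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False
num_classes = 2  # assumption: the paper's class count is not stated here
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
model = model.to(device).eval()

def fgsm_attack(model, images, labels, epsilon=0.05):
    """Return adversarial images perturbed by epsilon in the gradient's sign direction."""
    images = images.clone().detach().to(device).requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels.to(device))
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clamp back to the valid image range.
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()
```

Evaluating the classifier on `fgsm_attack(model, images, labels)` instead of
the clean images reproduces the kind of accuracy collapse the abstract
reports, with the drop's severity controlled by epsilon.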