The increasing availability of healthcare data requires accurate analysis for
disease diagnosis, progression, and real-time monitoring to provide improved
treatment to patients. In this context, Machine Learning (ML) models are
used to extract valuable features and insights from high-dimensional and
heterogeneous healthcare data to detect different diseases and patient
activities in a Smart Healthcare System (SHS). However, recent research shows
that ML models used in different application domains are vulnerable to
adversarial attacks. In this paper, we introduce a new type of adversarial
attack that exploits the ML classifiers used in an SHS. We consider an adversary
who has partial knowledge of the data distribution, the SHS model, and the ML algorithm to
perform both targeted and untargeted attacks. Employing these adversarial
capabilities, we manipulate medical device readings to alter the patient status
(e.g., disease-affected, normal condition, activity) in the output of the
SHS. Our attack utilizes five different adversarial ML algorithms (HopSkipJump,
Fast Gradient Method, Crafting Decision Tree, Carlini & Wagner, Zeroth Order
Optimization) to perform different malicious activities (e.g., data poisoning,
output misclassification) on an SHS. Moreover, based on an adversary's
capabilities in the training and testing phases, we perform white-box and black-box
attacks on an SHS. We evaluate the performance of our attacks in different SHS
settings and with different medical devices. Our extensive evaluation shows that our
proposed adversarial attack can significantly degrade the performance of an ML-based SHS
in correctly detecting patients' diseases and normal activities, which can
eventually lead to erroneous treatment.
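
To make the attack setting concrete, the following is a minimal sketch of a black-box, decision-based evasion attack (HopSkipJump) against a classifier of medical device readings. The use of IBM's Adversarial Robustness Toolbox (ART), the random-forest model, and the synthetic readings below are illustrative assumptions and do not reproduce the paper's actual SHS pipeline or datasets.

    # Sketch: black-box HopSkipJump evasion against a device-reading classifier.
    # Library choice (ART) and synthetic data are assumptions for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import HopSkipJump

    rng = np.random.default_rng(0)

    # Synthetic stand-in for device readings: 8 features, 2 patient states
    # (0 = normal condition, 1 = disease-affected).
    X = rng.normal(size=(1000, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    classifier = SklearnClassifier(model=model,
                                   clip_values=(float(X.min()), float(X.max())))

    # HopSkipJump only queries predicted labels, matching a black-box
    # adversary with no access to model internals or gradients.
    attack = HopSkipJump(classifier=classifier, targeted=False,
                         max_iter=20, max_eval=1000, init_eval=100, verbose=False)
    X_adv = attack.generate(x=X_test[:50])

    print("accuracy on clean readings:    ", model.score(X_test[:50], y_test[:50]))
    print("accuracy on perturbed readings:", model.score(X_adv, y_test[:50]))

A targeted variant of the same sketch would pass targeted=True and supply the desired patient-status labels to generate(), forcing the classifier toward a specific (e.g., "normal condition") output rather than any incorrect one.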