Abstract
Metaverse-enabled digital healthcare systems are expected to exploit an
unprecedented amount of personal health data while ensuring that individuals'
sensitive or private information is not disclosed. Machine learning and
artificial intelligence (ML/AI) techniques can be widely utilized in metaverse
healthcare systems, such as virtual clinics and intelligent consultations. In
such scenarios, the key challenge is that data privacy laws might not allow
virtual clinics to share their medical data with other parties. Moreover,
clinical ML/AI models themselves carry extensive information about the
underlying medical datasets, so that, if not rigorously privatized, private
attributes can easily be inferred by malicious actors in the metaverse. In this
paper, inspired
by the idea of "incognito mode", which has recently been developed as a
promising solution to safeguard metaverse users' privacy, we propose global
differential privacy for the distributed metaverse healthcare systems. In our
scheme, a randomized mechanism in the format of artificial "mix-up" noise is
applied to the federated clinical ML/AI models before sharing with other peers.
This way, we provide an adjustable level of distributed privacy against both
the malicious actors and honest-but-curious metaverse servers. Our evaluations
on the Breast Cancer Wisconsin dataset (BCWD) highlight the privacy-utility
trade-off (PUT) in terms of diagnosis accuracy and loss function for different
levels of privacy. We also compare our private scheme with the non-private
centralized setup in terms of diagnosis accuracy.
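The abstract does not specify the exact form of the "mix-up" noise mechanism. As a rough illustration of the general idea (adding calibrated artificial noise to federated model parameters before they are shared with peers), the following is a generic Gaussian output-perturbation sketch; the function name, clipping bound, and noise scale are hypothetical and not taken from the paper.

```python
import numpy as np

def privatize_weights(weights, clip_norm=1.0, sigma=1.0, rng=None):
    """Sketch of output perturbation for a shared model update:
    clip the parameter vector's L2 norm to bound sensitivity, then
    add Gaussian noise scaled to that bound before sharing.
    A smaller sigma means higher utility but weaker privacy."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    norm = np.linalg.norm(w)
    if norm > clip_norm:
        w = w * (clip_norm / norm)  # enforce the sensitivity bound
    noise = rng.normal(0.0, sigma * clip_norm, size=w.shape)
    return w + noise
```

The `sigma` parameter plays the role of the adjustable privacy level mentioned in the abstract: sweeping it and measuring diagnosis accuracy at each setting would trace out a privacy-utility trade-off curve.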