Abstract
Machine learning (ML) can help fight pandemics like COVID-19 by enabling
rapid screening of large volumes of images. To perform data analysis while
maintaining patient privacy, we create ML models that satisfy Differential
Privacy (DP). Previous works on private COVID-19 models rely in part on small
datasets, provide weak or unclear privacy guarantees, and do not investigate
practical privacy. We propose improvements to address these
open gaps. We account for inherent class imbalances and evaluate the
utility-privacy trade-off more extensively and over stricter privacy budgets.
Our evaluation is supported by empirically estimating practical privacy through
black-box Membership Inference Attacks (MIAs). Introducing DP should help
limit the leakage threat posed by MIAs, and our practical analysis is the first
to test this hypothesis on the COVID-19 classification task. Our results
indicate that the required privacy level may differ depending on the
task-dependent practical threat posed by MIAs. The results further suggest
that as DP guarantees strengthen, empirical privacy leakage improves only
marginally, and DP therefore appears to have a limited impact on practical
MIA defense. Our
findings identify possibilities for better utility-privacy trade-offs, and we
believe that empirical attack-specific privacy estimation can play a vital role
in tuning for practical privacy.
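
As background (this definition is standard and not stated in the abstract itself), the privacy budgets referred to above are the parameters of the usual (ε, δ)-DP guarantee: a randomized training mechanism M satisfies (ε, δ)-DP if, for all pairs of datasets D and D' differing in a single record and every set of outputs S,

$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta.$$

A smaller ε corresponds to a stricter privacy budget and hence a stronger worst-case guarantee.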