Abstract
The prevalence of machine learning in biomedical research is rapidly growing,
yet the trustworthiness of such research is often overlooked. While some prior
work has investigated the ability of adversarial attacks to degrade model
performance in medical imaging, the ability to falsely improve performance via
recently developed "enhancement attacks" may pose a greater threat to
biomedical machine learning. In the spirit of developing attacks to better
understand trustworthiness, we developed two techniques to drastically enhance
the prediction performance of classifiers with minimal changes to the features:
1) general enhancement of prediction performance, and 2) enhancement of one
particular method over another. Our general enhancement framework falsely
improved classifiers' accuracy from 50% to almost 100% while maintaining high
feature similarity between the original and enhanced data (Pearson's r > 0.99).
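
A minimal sketch of the general attack in Python with scikit-learn, on fully
synthetic data (the perturbation direction, the scale eps, and the data
dimensions are illustrative assumptions, not the exact procedure used here):

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n, p = 200, 1000
    X = rng.standard_normal((n, p))   # features carry no class signal
    y = rng.integers(0, 2, n)         # random labels -> chance-level accuracy

    # Enhancement: shift every sample slightly along one fixed direction,
    # with the sign of the shift set by its class label. The shift (norm
    # eps = 3) is small next to a typical feature vector (norm ~ sqrt(p)).
    d = rng.standard_normal(p)
    d /= np.linalg.norm(d)
    eps = 3.0
    X_enh = X + eps * np.outer(2 * y - 1, d)

    clf = LogisticRegression(max_iter=2000)
    print(cross_val_score(clf, X, y, cv=5).mean())      # ~0.5 (chance)
    print(cross_val_score(clf, X_enh, y, cv=5).mean())  # should approach 1.0

    # Per-sample similarity between original and enhanced feature vectors
    print(np.mean([pearsonr(X[i], X_enh[i])[0] for i in range(n)]))  # > 0.99

Because the shift is small relative to each sample's feature norm, per-sample
correlations stay above 0.99 even as the classes become nearly separable.
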
Similarly, the method-specific enhancement framework was effective in falsely
improving the performance of one method over another. For example, a simple
neural network outperformed logistic regression by 17% on our enhanced dataset,
although no performance difference was present in the original dataset.
Crucially, the original and enhanced data were still similar (r = 0.99).
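
A companion sketch of a method-specific attack, under the assumption that the
labels are hidden in a feature-magnitude pattern that a nonlinear model can
exploit but a linear one cannot (the construction and constants are again
illustrative, and the size of the resulting gap depends on training details):

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n, p = 1000, 2000
    X = rng.standard_normal((n, p))
    y = rng.integers(0, 2, n)

    # Hide the labels in a magnitude pattern: push feature 0 of class-1
    # samples away from zero (keeping its sign), leaving class-0 samples
    # untouched. Class means stay identical in every feature, so a linear
    # model sees nothing, but |feature 0| now separates the classes.
    X_enh = X.copy()
    mask = y == 1
    X_enh[mask, 0] += 6.0 * np.sign(X[mask, 0])

    lin = LogisticRegression(max_iter=2000)
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    print(cross_val_score(lin, X_enh, y, cv=5).mean())  # stays near chance
    print(cross_val_score(net, X_enh, y, cv=5).mean())  # typically much higher

    # Per-sample similarity: class-0 rows are untouched (r = 1), and the
    # single-feature shift barely moves class-1 rows in 2000 dimensions.
    print(np.mean([pearsonr(X[i], X_enh[i])[0] for i in range(n)]))  # > 0.99

Both classes keep identical means in every feature, so logistic regression has
no linear signal to find, while the network can learn the magnitude rule.
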
Our results demonstrate the feasibility of minor data manipulations to achieve any
desired prediction performance, which presents a pressing ethical challenge
for the future of biomedical machine learning. These findings emphasize the
need for more robust data provenance tracking and other precautionary measures
to ensure the integrity of biomedical machine learning research.