Abstract
As machine learning models become increasingly prevalent in critical
decision-making systems in fields such as finance and healthcare, ensuring
their robustness against adversarial attacks and changes in the input data is
paramount, especially where models may overfit. This paper proposes a
comprehensive framework for assessing the robustness of machine learning
models through covariate perturbation techniques. We explore various
perturbation strategies and examine their impact on model predictions,
including separate strategies for numeric and non-numeric variables,
perturbation summaries for comparing model robustness across different
scenarios, and local robustness diagnosis to identify regions of the data
where a model is particularly unstable. Through empirical studies on a
real-world dataset, we demonstrate the effectiveness of our approach in
comparing robustness across models, identifying instabilities in a model, and
enhancing model robustness.
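As a rough illustration of the kind of covariate perturbation the abstract describes (not the authors' actual method), the sketch below perturbs a numeric feature with Gaussian noise and a categorical feature by random resampling, then summarizes robustness as the mean absolute shift in predicted probability. The model choice, noise scale, and resampling fraction are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: one numeric covariate and one categorical covariate encoded 0/1/2.
n = 500
X = np.column_stack([rng.normal(size=n), rng.integers(0, 3, size=n)])
y = (X[:, 0] + 0.5 * (X[:, 1] == 2) + rng.normal(scale=0.3, size=n) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

def perturb(X, rng, sigma=0.1, frac=0.1):
    """Illustrative perturbation: add Gaussian noise (scaled by the column's
    std) to the numeric column, and resample a random fraction of the
    categorical column from its support."""
    Xp = X.copy()
    Xp[:, 0] += rng.normal(scale=sigma * X[:, 0].std(), size=len(X))
    mask = rng.random(len(X)) < frac
    Xp[mask, 1] = rng.integers(0, 3, size=mask.sum())
    return Xp

# Robustness summary: average absolute change in predicted probability
# over repeated perturbations (smaller = more robust).
base = model.predict_proba(X)[:, 1]
shifts = [np.abs(model.predict_proba(perturb(X, rng))[:, 1] - base).mean()
          for _ in range(20)]
print(f"mean prediction shift under perturbation: {np.mean(shifts):.4f}")
```

The same per-row shifts could also be inspected locally (e.g. by region of the feature space) to flag where predictions are least stable, in the spirit of the local robustness diagnosis mentioned above.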