Abstract
In this paper we criticize the robustness measure traditionally employed to
assess the performance of machine learning models deployed in adversarial
settings. To mitigate the limitations of robustness, we introduce a new measure
called resilience and focus on its verification. In particular, we discuss
how resilience can be verified by combining a traditional robustness
verification technique with a data-independent stability analysis, which
identifies a subset of the feature space where the model does not change its
predictions despite adversarial manipulations. We then introduce a formally
sound data-independent stability analysis for decision trees and decision tree
ensembles, which we experimentally assess on public datasets and leverage
for resilience verification. Our results show that resilience verification is
useful and feasible in practice, yielding a more reliable security assessment
of both standard and robust decision tree models.
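
As a rough illustration of the stability notion mentioned above (not the paper's verified procedure), the following minimal Python sketch checks whether a single scikit-learn decision tree predicts one class over an axis-aligned box of the feature space, i.e., whether the region is stable under L-infinity perturbations of radius eps. The function names and the eps parameter are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a region-stability check for one decision tree: collect the
# classes of all leaves reachable from inputs inside the box [lo, hi]; the tree
# is stable on the box iff exactly one class is reachable. Relies on
# scikit-learn's fitted tree internals (feature, threshold, children, value).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def reachable_classes(tree, lo, hi, node=0):
    """Classes of leaves reachable by some x with lo <= x <= hi (elementwise)."""
    t = tree.tree_
    if t.children_left[node] == -1:  # leaf: return its majority class
        return {int(np.argmax(t.value[node]))}
    f, thr = t.feature[node], t.threshold[node]
    classes = set()
    if lo[f] <= thr:  # part of the box satisfies x[f] <= thr, so left is reachable
        classes |= reachable_classes(tree, lo, hi, t.children_left[node])
    if hi[f] > thr:   # part of the box satisfies x[f] > thr, so right is reachable
        classes |= reachable_classes(tree, lo, hi, t.children_right[node])
    return classes

def stable_on_box(tree, x, eps):
    """True iff the tree predicts a single class on the L-inf ball of radius eps."""
    return len(reachable_classes(tree, x - eps, x + eps)) == 1

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(stable_on_box(clf, X[0], eps=0.05))  # hypothetical perturbation radius
```

For an ensemble, such a per-tree traversal is no longer sufficient on its own, since the trees vote jointly; this is precisely where a dedicated, formally sound analysis like the one proposed in the paper is needed.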