Data labels in the security field are frequently noisy, limited, or biased
towards a subset of the population. As a result, common evaluation methods,
such as accuracy, precision, and recall, or analysis of performance curves
computed from labeled datasets, do not provide sufficient confidence in
the real-world performance of a machine learning (ML) model. This has slowed
the adoption of machine learning in the field. In industry today, we rely
on domain expertise and lengthy manual evaluation to build this confidence
before shipping a new model for security applications. In this paper, we
introduce Firenze, a novel framework for comparative evaluation of ML models'
performance using domain expertise, encoded into scalable functions called
markers. We show that markers computed and combined over select subsets of
samples, called regions of interest, can provide a robust estimate of the
models' real-world performance. Critically, we use statistical hypothesis
testing to ensure that observed differences, and therefore the conclusions
emerging from our framework, are more prominent than those observable from
noise alone (see the sketch below). Using
simulations and two real-world datasets, covering malware detection and
domain-name-service reputation, we illustrate the effectiveness and
limitations of our approach, along with the insights it surfaces. Taken
together, we propose Firenze as a resource for fast,
interpretable, and collaborative model development and evaluation by mixed
teams of researchers, domain experts, and business owners.
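
To make the marker-and-region idea concrete, here is a minimal sketch of the
workflow the abstract describes: two models are compared via a marker computed
over a region of interest, and a hypothesis test checks that the observed
difference is larger than noise alone would produce. This is an illustration
under assumptions, not the paper's exact formulation; the marker definition
(mean score on a region the domain expert believes should score high), the
synthetic scores, the `roi_mask` rule, and the choice of a Mann-Whitney U test
are all hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical scores from two candidate models on the same unlabeled samples.
scores_a = rng.beta(2, 5, size=5000)
scores_b = rng.beta(2, 4, size=5000)

# Hypothetical region of interest: samples matching a domain-expert rule,
# e.g. binaries signed by a certificate known to be abused.
roi_mask = rng.random(5000) < 0.1

def marker(scores, mask):
    """A marker: mean model score over a region of interest where the
    domain expert expects high scores (higher is better here)."""
    return scores[mask].mean()

m_a = marker(scores_a, roi_mask)
m_b = marker(scores_b, roi_mask)

# Hypothesis test: is model B's score distribution on the region of
# interest stochastically greater than model A's, beyond noise?
stat, p_value = mannwhitneyu(scores_b[roi_mask], scores_a[roi_mask],
                             alternative="greater")
print(f"marker A={m_a:.3f}, marker B={m_b:.3f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Model B scores significantly higher on this region of interest.")
```

In practice, the markers would encode real domain rules (e.g., known-bad
signers or sinkholed domains), and the specific statistical test would be
chosen to match the marker's distributional properties.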