Machine learning has proved invaluable for a wide range of tasks, yet it has
also proved vulnerable to evasion attacks, i.e., maliciously crafted
perturbations of input data designed to force mispredictions. In this paper we
propose a novel technique to verify the security of decision tree models
against evasion attacks with respect to an expressive threat model, where the
attacker can be represented by an arbitrary imperative program. Our approach
exploits the interpretability of decision trees to transform them into
imperative programs, which are amenable to traditional program analysis
techniques. By leveraging the abstract interpretation framework, we are able to
soundly verify the security guarantees of decision tree models trained over
publicly available datasets. Our experiments show that our technique is both
precise and efficient, yielding only a small number of false positives and
scaling to cases that are intractable for a competing approach.
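To illustrate the core idea, the sketch below shows how a small decision tree can be compiled into an imperative program and paired with an attacker program, following the threat model described above. The tree structure, thresholds, attack budget, and function names are all illustrative assumptions, not taken from the paper; the actual translation and attacker models may differ.

```python
# Hypothetical sketch: a depth-2 decision tree over two features,
# compiled to an imperative program (nested conditionals), together
# with a simple attacker program that perturbs one feature within a
# budget. All concrete values here are illustrative assumptions.

def tree_predict(x):
    # Each internal node becomes a conditional on one feature;
    # each leaf becomes a return of the predicted class.
    if x[0] <= 5.0:
        if x[1] <= 2.5:
            return 0
        return 1
    if x[1] <= 7.0:
        return 1
    return 0

def attacker(x, budget=1.0):
    # The threat model allows the attacker to be an arbitrary
    # imperative program; this toy attacker enumerates a few
    # candidate perturbations of feature 0 within the budget.
    candidates = [list(x)]
    for delta in (-budget, budget):
        y = list(x)
        y[0] += delta
        candidates.append(y)
    return candidates

def is_robust(x, budget=1.0):
    # The instance is robust when every adversarial candidate
    # receives the same prediction as the original input.
    label = tree_predict(x)
    return all(tree_predict(y) == label for y in attacker(x, budget))

print(is_robust([4.5, 2.0]))  # False: pushing x[0] past 5.0 flips the leaf
```

Once the tree and the attacker are both expressed as programs, checking robustness reduces to analyzing the composed program; an abstract interpreter can then over-approximate the set of reachable predictions to verify security soundly, rather than enumerating perturbations as this toy example does.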