Abstract
The big-data revolution announced ten years ago does not seem to have fully
happened at the expected scale. One of the main obstacles has been the
lack of data circulation, and one of the many reasons people and organizations
did not share as much data as expected is the privacy risk associated with
data-sharing operations. There have been many works on practical systems to compute
statistical queries with Differential Privacy (DP). There have also been
practical implementations of systems to train Neural Networks with DP, but
relatively little effort has been dedicated to designing scalable classical
Machine Learning (ML) models providing DP guarantees. In this work we describe
and implement a DP fork of a battle-tested ML model: XGBoost. Our approach
outperforms previous attempts at the task by a large margin in terms of accuracy
achieved for a given privacy budget. It is also the only DP implementation of
boosted trees that scales to big data and can run in distributed environments
such as Kubernetes, Dask, or Apache Spark.
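As a minimal illustration of the kind of DP statistical-query systems mentioned above (not the DP-XGBoost method of this work), the sketch below applies the standard Laplace mechanism to a count query: a count has sensitivity 1, so adding Laplace noise of scale 1/ε yields ε-differential privacy. The dataset and function names are hypothetical; NumPy is assumed.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Release a count query under epsilon-DP via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so noise drawn from Laplace(0, 1/epsilon)
    gives epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for x in data if predicate(x))
    # Calibrated noise: scale = sensitivity / epsilon = 1 / epsilon.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical data: how many individuals are 40 or older? (true answer: 3)
ages = [23, 35, 41, 29, 52, 61, 37]
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=1.0,
                      rng=np.random.default_rng(0))
```

A smaller ε means a larger noise scale and hence stronger privacy at the cost of accuracy; DP model training applies the same budget-versus-accuracy trade-off at a much larger scale.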