Federated learning is a distributed machine learning paradigm that enables
collaborative training across multiple parties while ensuring data privacy.
Gradient Boosting Decision Trees (GBDT), such as XGBoost, have gained
popularity due to their high performance and strong interpretability.
Therefore, there has been a growing interest in adapting XGBoost for use in
federated settings via cryptographic techniques. However, these approaches
often lack rigorous theoretical privacy guarantees and incur high computational
costs in both time and space. In this paper, we propose a variant of vertical
federated XGBoost with a bilateral differential privacy guarantee: MaskedXGBoost.
We construct well-calibrated noise to perturb the intermediate information
exchanged during training. Part of the noise lies in the null space of the
arithmetic operation used for split-score evaluation in XGBoost, which lets us
achieve consistently better utility than other perturbation methods at lower
overhead than encryption-based techniques. We
provide theoretical utility analysis and empirically verify privacy
preservation. Evaluations on multiple datasets validate our algorithm's
superiority over existing approaches in both utility and efficiency.
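To make the null-space idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual mechanism, and the function name `zero_sum_noise` is invented for illustration): a noise vector whose entries sum to zero lies in the null space of the summation operator, so adding it to per-bucket gradient statistics masks individual bucket values while leaving the aggregate that enters the parent-node term of the split score unchanged.

```python
import numpy as np

def zero_sum_noise(rng, size, scale):
    # Draw Gaussian noise, then project it onto the null space of the
    # summation operator 1^T x by subtracting the mean, so the entries
    # sum to (numerically) zero.
    z = rng.normal(0.0, scale, size)
    return z - z.mean()

rng = np.random.default_rng(0)
g = rng.normal(size=8)            # toy per-bucket gradient sums
noise = zero_sum_noise(rng, 8, 1.0)
g_noisy = g + noise

# Each bucket value is perturbed, but the total gradient sum is
# preserved up to floating-point error.
assert np.isclose(g.sum(), g_noisy.sum())
```

In the actual protocol the noise must also be calibrated to satisfy the differential privacy guarantee; this sketch only illustrates why a null-space component does not degrade the aggregate statistic.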