Abstract
Federated learning makes it possible to train a machine learning model on
decentralized data. Bayesian networks are probabilistic graphical models that
have been widely used in artificial intelligence applications. Their popularity
stems from the fact that they can be built by combining existing expert knowledge with data and that they are highly interpretable, which makes them useful for decision support, e.g., in healthcare. While some research has been published on the
federated learning of Bayesian networks, publications on Bayesian networks in a
vertically partitioned or heterogeneous data setting (where different variables
are located in different datasets) are limited and suffer from important
omissions, such as the handling of missing data. In this article, we propose a
novel method called VertiBayes to train Bayesian networks (structure and
parameters) on vertically partitioned data, which can handle missing values as
well as an arbitrary number of parties. For structure learning, we adapt the widely used K2 algorithm using a privacy-preserving scalar product protocol (see the count sketch below). For
parameter learning, we use a two-step approach: first, we learn an intermediate model via maximum likelihood, treating missing values as a special value; then we use the EM algorithm to train the final model on synthetic data generated by the intermediate model (see the two-step sketch below). The privacy guarantees of our approach are equivalent to those provided by the underlying privacy-preserving scalar product protocol. We
experimentally show that our approach produces models comparable to those learnt using traditional algorithms, and we estimate the increase in computational cost in terms of the number of samples, the size of the network, and its complexity. Finally, we propose two
alternative approaches to estimate the performance of the model using
vertically partitioned data, and we show experimentally that they lead to reasonably accurate estimates.
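
The count sketch below is a minimal illustration of why a scalar product protocol suffices for the structure-learning step: the joint counts that K2 scoring and CPT estimation need can be written as scalar products of party-local 0/1 indicator vectors. The toy data, the variable names, and the plain dot product standing in for the secure protocol are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # records, aligned across parties by a shared identifier

# Party A holds variable X, party B holds variable Y, for the same records.
x = rng.integers(0, 2, n)  # local to party A
y = rng.integers(0, 3, n)  # local to party B

def secure_scalar_product(u, v):
    """Stand-in for the privacy-preserving scalar product protocol.

    In the real protocol neither party reveals its vector; here we simply
    compute the dot product to show what the protocol must return."""
    u = np.asarray(u, dtype=np.int64)
    v = np.asarray(v, dtype=np.int64)
    return int(u @ v)

# Each party builds a local 0/1 indicator vector for "my variable takes
# this value"; the scalar product of the two vectors is the joint count
# N(X = a, Y = b) needed for K2 scoring and CPT estimation.
counts = np.zeros((2, 3), dtype=int)
for a in range(2):
    for b in range(3):
        counts[a, b] = secure_scalar_product(x == a, y == b)

# Conditional probability table P(Y | X) from the jointly computed counts.
cpt = counts / counts.sum(axis=1, keepdims=True)
print(cpt)
```

The same construction extends to more parties and to larger parent sets: each additional condition contributes another local indicator vector, and the count is the scalar product of their elementwise combination.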
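The two-step sketch below illustrates the parameter-learning procedure on a toy network A -> B: step 1 fits maximum-likelihood CPTs with missing values treated as an extra state, yielding the intermediate model; step 2 samples synthetic records from it, with sampled missing states becoming genuinely missing cells. The MISSING token, the tiny dataset, and the closing note that a standard centralised EM implementation finishes the job are all assumptions for illustration, not the paper's code.

```python
import random
from collections import Counter, defaultdict

MISSING = "?"
random.seed(0)

# Toy vertically-complete training data for a network A -> B.
data = [("a0", "b0"), ("a0", MISSING), ("a1", "b1"),
        ("a1", "b0"), (MISSING, "b1"), ("a0", "b0")]

def normalise(counter):
    total = sum(counter.values())
    return {k: v / total for k, v in counter.items()}

# Step 1: maximum-likelihood estimates in which MISSING is treated as just
# another state, giving the intermediate model P(A) and P(B | A).
p_a = normalise(Counter(a for a, _ in data))
b_given_a = defaultdict(Counter)
for a, b in data:
    b_given_a[a][b] += 1
p_b_given_a = {a: normalise(c) for a, c in b_given_a.items()}

def sample(dist):
    return random.choices(list(dist), weights=dist.values())[0]

# Step 2: sample synthetic records from the intermediate model; sampled
# MISSING states become genuinely missing cells in the synthetic data.
synthetic = []
for _ in range(10_000):
    a = sample(p_a)
    b = sample(p_b_given_a[a])
    synthetic.append((None if a == MISSING else a,
                      None if b == MISSING else b))

# A standard (centralised) EM implementation would now be trained on
# `synthetic` to obtain the final CPTs without a MISSING state.
print(synthetic[:5])
```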