🤖 AI Summary
To address the dual challenges of insufficient privacy guarantees and high-dimensional communication overhead in Bayesian network structure learning (BNSL) under decentralized data, this paper proposes Fed-Sparse-BNSL—a novel federated learning framework integrating differential privacy, sparse gradient updates, and linear Gaussian modeling. Clients perform local sparse greedy search and upload only Laplace-noised sparse gradients, substantially reducing communication load and improving privacy budget efficiency. We theoretically establish that the learned structure remains identifiable under strong (ε,δ)-differential privacy. Experiments on synthetic and real-world datasets demonstrate that Fed-Sparse-BNSL achieves structural recovery accuracy close to non-private baselines, reduces total communication volume by up to 62%, and enhances privacy protection strength by an order of magnitude.
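The client-side step described above (sparsify the local gradient to a few candidate edges, then add Laplace noise before upload) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual algorithm: the function names (`sparsify_top_k`, `laplace_noised_sparse_update`), the per-entry clipping bound, and the sensitivity calibration `2·clip·k` are assumptions made for the sketch; the paper's precise privacy accounting for (ε,δ)-DP and index selection may differ.

```python
import numpy as np

def sparsify_top_k(grad, k):
    """Keep only the k largest-magnitude entries (candidate edge updates);
    zero out the rest. Returns the sparse vector and the kept indices."""
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse, idx

def laplace_noised_sparse_update(grad, k, epsilon, clip=1.0, rng=None):
    """Hypothetical client-side update: clip each entry, keep the top-k edges,
    and add Laplace noise to only the transmitted coordinates.

    The noise scale uses a simplistic L1-sensitivity bound of 2*clip*k for
    k clipped entries; a rigorous treatment would also account for the
    data-dependent choice of indices."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.clip(grad, -clip, clip)              # bound each entry's contribution
    sparse, idx = sparsify_top_k(g, k)
    scale = 2.0 * clip * k / epsilon            # assumed L1 sensitivity / epsilon
    noised_vals = sparse[idx] + rng.laplace(0.0, scale, size=k)
    # Upload only (indices, noised values): communication is O(k), not O(d).
    return idx, noised_vals
```

Transmitting only the k index-value pairs, rather than the full d-dimensional gradient, is what reduces both the communication volume and the number of coordinates that consume privacy budget each round.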
📝 Abstract
Learning the structure of a Bayesian network from decentralized data poses two major challenges: (i) ensuring rigorous privacy guarantees for participants, and (ii) avoiding communication costs that scale poorly with dimensionality. In this work, we introduce Fed-Sparse-BNSL, a novel federated method for learning linear Gaussian Bayesian network structures that addresses both challenges. By combining differential privacy with greedy updates that target only a few relevant edges per participant, Fed-Sparse-BNSL efficiently uses the privacy budget while keeping communication costs low. Our careful algorithmic design preserves model identifiability and enables accurate structure estimation. Experiments on synthetic and real datasets demonstrate that Fed-Sparse-BNSL achieves utility close to non-private baselines while offering substantially stronger privacy and communication efficiency.