Differentially Private and Federated Structure Learning in Bayesian Networks

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of insufficient privacy guarantees and high-dimensional communication overhead in Bayesian network structure learning (BNSL) under decentralized data, this paper proposes Fed-Sparse-BNSL—a novel federated learning framework integrating differential privacy, sparse gradient updates, and linear Gaussian modeling. Clients perform local sparse greedy search and upload only Laplace-noised sparse gradients, substantially reducing communication load and improving privacy budget efficiency. We theoretically establish that the learned structure remains identifiable under strong (ε,δ)-differential privacy. Experiments on synthetic and real-world datasets demonstrate that Fed-Sparse-BNSL achieves structural recovery accuracy close to non-private baselines, reduces total communication volume by up to 62%, and enhances privacy protection strength by an order of magnitude.
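The client-side step described above (a local gradient on a linear Gaussian model, sparsified to a few entries, then Laplace-noised before upload) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the top-k selection rule, and the fixed `sensitivity` parameter are assumptions, and the paper's greedy search and clipping scheme may differ.

```python
import numpy as np

def client_update(X, W, k=5, epsilon=1.0, sensitivity=1.0, rng=None):
    """One illustrative client step: sparse gradient of the linear Gaussian
    squared-error loss (1/2n)||X - X @ W||_F^2, with Laplace noise on the
    k entries that would be communicated. Hypothetical sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    # Gradient w.r.t. the weighted adjacency matrix W; diagonal zeroed
    # since a node cannot be its own parent.
    residual = X - X @ W
    grad = -(X.T @ residual) / n
    np.fill_diagonal(grad, 0.0)
    # Sparse update: keep only the k largest-magnitude entries.
    flat = np.abs(grad).ravel()
    idx = np.argpartition(flat, -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse.ravel()[idx] = grad.ravel()[idx]
    # Laplace mechanism: noise scale sensitivity/epsilon, applied only to
    # the k entries actually sent, which is what stretches the budget.
    sparse.ravel()[idx] += rng.laplace(0.0, sensitivity / epsilon, size=k)
    return sparse  # only k noisy (index, value) pairs need to leave the client
```

Because noise is added only to the k transmitted coordinates rather than the full d×d gradient, both the per-round communication and the per-round privacy cost scale with k instead of the dimension.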

📝 Abstract
Learning the structure of a Bayesian network from decentralized data poses two major challenges: (i) ensuring rigorous privacy guarantees for participants, and (ii) avoiding communication costs that scale poorly with dimensionality. In this work, we introduce Fed-Sparse-BNSL, a novel federated method for learning linear Gaussian Bayesian network structures that addresses both challenges. By combining differential privacy with greedy updates that target only a few relevant edges per participant, Fed-Sparse-BNSL efficiently uses the privacy budget while keeping communication costs low. Our careful algorithmic design preserves model identifiability and enables accurate structure estimation. Experiments on synthetic and real datasets demonstrate that Fed-Sparse-BNSL achieves utility close to non-private baselines while offering substantially stronger privacy and communication efficiency.
Problem

Research questions and friction points this paper is trying to address.

Ensures privacy in decentralized Bayesian network learning
Reduces communication costs in federated structure estimation
Maintains model accuracy while protecting participant data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning with differential privacy
Greedy updates targeting few edges
Preserves identifiability and communication efficiency
Ghita Fassy El Fehri
Inria, Université de Montpellier, INSERM
Aurélien Bellet
Research scientist at Inria
Machine Learning · Artificial Intelligence · Data Science · Privacy · Federated Learning
Philippe Bastien
L’Oréal Research and Innovation, Aulnay-Sous-Bois