Bayesian Federated Learning for Continual Training

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of temporal distributional shift and catastrophic forgetting in Bayesian federated learning (BFL) under dynamic environments, this paper proposes the first Continual Bayesian Federated Learning (CBFL) framework. CBFL leverages historical task posteriors as priors to guide Bayesian updates for new tasks, integrating sequential inference via Stochastic Gradient Langevin Dynamics (SGLD), uncertainty-aware modeling, and federated posterior aggregation. This unified design enables simultaneous knowledge retention and adaptive distributional alignment. Evaluated on radar-based human sensing, CBFL significantly enhances model robustness: it reduces Expected Calibration Error (ECE) by 32%, accelerates convergence by 1.8×, and effectively mitigates performance degradation induced by inter-day data drift.
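The summary above rests on one mechanical idea: each new task's Bayesian update is run with SGLD, with the previous task's posterior serving as the prior. Below is a minimal sketch of such an SGLD step, assuming a diagonal-Gaussian approximation of the previous posterior; the function and variable names are illustrative and not taken from the paper.

```python
# Minimal sketch: one SGLD step whose prior is the previous task's posterior,
# approximated as a diagonal Gaussian N(prev_mean, prev_var).
# All names (sgld_step, prev_mean, prev_var, ...) are illustrative assumptions.
import torch

def sgld_step(params, grads_nll, prev_mean, prev_var, step_size, data_size, batch_size):
    """theta <- theta - (eps/2) * grad U(theta) + N(0, eps), where
    U(theta) = -log prior - (N/n) * sum_i log p(y_i | x_i, theta)."""
    new_params = []
    for p, g, mu, var in zip(params, grads_nll, prev_mean, prev_var):
        # Gradient of the negative log Gaussian prior centred at the old posterior mean.
        grad_prior = (p - mu) / var
        # Mini-batch gradient of the negative log-likelihood, rescaled to the full dataset.
        grad_nll = (data_size / batch_size) * g
        noise = torch.randn_like(p) * (step_size ** 0.5)
        new_params.append(p - 0.5 * step_size * (grad_prior + grad_nll) + noise)
    return new_params

# Toy usage: a single scalar parameter anchored to the old posterior while following the data.
theta = [torch.tensor([0.0])]
prev_mean, prev_var = [torch.tensor([1.0])], [torch.tensor([0.5])]
grad_nll_batch = [torch.tensor([-2.0])]  # stand-in for a mini-batch gradient
theta = sgld_step(theta, grad_nll_batch, prev_mean, prev_var,
                  step_size=1e-3, data_size=1000, batch_size=32)
```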

📝 Abstract
Bayesian Federated Learning (BFL) enables uncertainty quantification and robust adaptation in distributed learning. In contrast to the frequentist approach, it estimates the posterior distribution of a global model, offering insights into model reliability. However, current BFL methods neglect continual learning challenges in dynamic environments where data distributions shift over time. We propose a continual BFL framework applied to human sensing with radar data collected over several days. Using Stochastic Gradient Langevin Dynamics (SGLD), our approach sequentially updates the model, leveraging past posteriors to construct the prior for new tasks. We assess the accuracy, the expected calibration error (ECE), and the convergence speed of our approach against several baselines. Results highlight the effectiveness of continual Bayesian updates in preserving knowledge and adapting to evolving data.
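Calibration is evaluated here via the expected calibration error (ECE). For reference, below is a minimal, standard ECE computation based on confidence binning; the 15-bin choice and the function name are assumptions, not details from the paper.

```python
# Minimal sketch of Expected Calibration Error (ECE): predictions are binned
# by confidence, and the gap between average confidence and accuracy in each
# bin is averaged, weighted by bin size. Binning choice is an assumption.
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in the bin
    return ece

# Toy usage with three predictions.
print(expected_calibration_error([0.9, 0.6, 0.8], [1, 0, 2], [1, 1, 2]))
```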
Problem

Research questions and friction points this paper is trying to address.

Address continual learning in Bayesian Federated Learning
Handle dynamic environments with shifting data distributions
Improve model reliability and adaptation in distributed learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian Federated Learning for uncertainty quantification (see the posterior-aggregation sketch after this list)
Stochastic Gradient Langevin Dynamics for continual updates
Leveraging past posteriors for new task priors
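As referenced above, the federated side of the framework aggregates per-client posteriors on the server. The sketch below shows one common way to combine diagonal-Gaussian client posteriors, a precision-weighted product of Gaussians; it is an illustrative assumption, not necessarily the paper's exact aggregation rule.

```python
# Hedged sketch of federated posterior aggregation for diagonal-Gaussian
# client posteriors: a precision-weighted product of Gaussians.
# The paper's exact aggregation rule may differ.
import numpy as np

def aggregate_gaussian_posteriors(client_means, client_vars):
    """Combine per-client (mean, variance) pairs into a global Gaussian posterior."""
    precisions = [1.0 / v for v in client_vars]
    global_precision = np.sum(precisions, axis=0)
    global_var = 1.0 / global_precision
    global_mean = global_var * np.sum(
        [p * m for p, m in zip(precisions, client_means)], axis=0)
    return global_mean, global_var

# Toy usage: two clients, two parameters each.
means = [np.array([0.2, 1.0]), np.array([0.4, 0.8])]
vars_ = [np.array([0.1, 0.2]), np.array([0.3, 0.2])]
print(aggregate_gaussian_posteriors(means, vars_))
```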