Explainability and Continual Learning meet Federated Learning at the Network Edge

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) in resource-constrained, privacy-sensitive wireless edge networks faces fundamental challenges in interpretability, support for non-differentiable models (e.g., decision trees), and continual adaptation to evolving data distributions. Method: We propose the first multi-objective optimization framework tailored for edge FL, jointly optimizing accuracy, model interpretability, and communication efficiency. We design a distributed training mechanism enabling FL over non-differentiable decision trees—eliminating reliance on backpropagation. Furthermore, we introduce a novel federated continual learning paradigm that forgoes full replay buffers, relying instead on minimal local buffering for lifelong adaptive updates. Results: Extensive experiments demonstrate that our approach achieves a Pareto-optimal trade-off among high interpretability, low communication overhead, and strong generalization—all while preserving data privacy. It provides a systematic, trustworthy solution for intelligent edge systems.
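The summary's claim of backpropagation-free FL over decision trees can be illustrated with a minimal sketch. All names here are hypothetical assumptions, not the paper's mechanism: each client fits a tiny decision stump on its local 1-D data, and the server aggregates the stumps as a majority-vote ensemble instead of averaging gradients.

```python
# Hypothetical sketch of gradient-free FL over trees: clients fit local
# decision stumps; the server combines them by majority vote. This is an
# illustrative stand-in, not the mechanism proposed in the paper.

def fit_stump(data):
    """Pick the threshold on a 1-D feature minimizing misclassifications,
    for the fixed rule: predict 1 if x > threshold, else 0."""
    best = None  # (threshold, error count)
    for x, _ in data:
        preds = [1 if xi > x else 0 for xi, _ in data]
        err = sum(p != y for p, (_, y) in zip(preds, data))
        if best is None or err < best[1]:
            best = (x, err)
    return best[0]

def ensemble_predict(thresholds, x):
    """Server-side majority vote over the client stumps."""
    votes = sum(1 for t in thresholds if x > t)
    return 1 if votes * 2 > len(thresholds) else 0

# Three clients whose local data all follow "label 1 iff x > 3":
clients = [
    [(1.0, 0), (2.0, 0), (4.0, 1)],
    [(2.5, 0), (3.5, 1), (5.0, 1)],
    [(1.0, 0), (3.0, 0), (6.0, 1)],
]
thresholds = [fit_stump(d) for d in clients]  # no gradients exchanged
```

Only the fitted thresholds leave each client, so no raw data or gradients are communicated, which is the property the summary highlights.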

📝 Abstract
As edge devices become more capable and pervasive in wireless networks, there is growing interest in leveraging their collective compute power for distributed learning. However, optimizing learning at the network edge entails unique challenges, particularly when moving beyond conventional settings and objectives. While Federated Learning (FL) has emerged as a key paradigm for distributed model training, critical challenges persist. First, existing approaches often overlook the trade-off between predictive accuracy and interpretability. Second, they struggle to integrate inherently explainable models such as decision trees because their non-differentiable structure makes them not amenable to backpropagation-based training algorithms. Lastly, they lack meaningful mechanisms for continual Machine Learning (ML) model adaptation through Continual Learning (CL) in resource-limited environments. In this paper, we pave the way for a set of novel optimization problems that emerge in distributed learning at the network edge with wirelessly interconnected edge devices, and we identify key challenges and future directions. Specifically, we discuss how multi-objective optimization (MOO) can be used to address the trade-off between predictive accuracy and explainability when using complex predictive models. Next, we discuss the implications of integrating inherently explainable tree-based models into distributed learning settings. Finally, we investigate how CL strategies can be effectively combined with FL to support adaptive, lifelong learning when limited-size buffers are used to store past data for retraining. Our approach offers a cohesive set of tools for designing privacy-preserving, adaptive, and trustworthy ML solutions tailored to the demands of edge computing and intelligent services.
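The accuracy-versus-explainability trade-off the abstract attributes to MOO can be sketched with a standard weighted-sum scalarization over a Pareto front. Everything below is an illustrative assumption: tree depth stands in as the interpretability proxy, and the candidate models, weights, and error values are made up, not taken from the paper.

```python
# Hypothetical sketch of the MOO trade-off: filter candidate models to the
# Pareto front in (error, depth), then pick one via weighted-sum
# scalarization. Depth as an interpretability proxy is an assumption.

def scalarized_loss(error, depth, max_depth=12, w_acc=0.8, w_interp=0.2):
    """Combine predictive error with a normalized complexity penalty."""
    return w_acc * error + w_interp * (depth / max_depth)

def pareto_front(candidates):
    """Keep candidates not dominated in both error and depth."""
    return [
        c for c in candidates
        if not any(
            o["error"] <= c["error"] and o["depth"] <= c["depth"]
            and (o["error"] < c["error"] or o["depth"] < c["depth"])
            for o in candidates
        )
    ]

candidates = [
    {"name": "deep_tree", "error": 0.08, "depth": 12},
    {"name": "mid_tree",  "error": 0.11, "depth": 6},
    {"name": "stump",     "error": 0.25, "depth": 2},
    {"name": "bad_deep",  "error": 0.20, "depth": 12},  # dominated by deep_tree
]

front = pareto_front(candidates)
best = min(front, key=lambda c: scalarized_loss(c["error"], c["depth"]))
```

Varying the weights `w_acc` and `w_interp` traces out different operating points on the front, which is the mechanism by which a deployment can favor interpretability over raw accuracy or vice versa.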
Problem

Research questions and friction points this paper is trying to address.

Balancing predictive accuracy and explainability in edge-based federated learning
Integrating non-differentiable explainable models like decision trees into federated learning
Enabling continual learning in resource-limited federated edge environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-objective optimization balances accuracy and explainability
Integrates tree-based models in distributed learning
Combines continual learning with federated learning
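The last innovation, combining CL with FL under limited-size buffers, can be sketched as a FedAvg-style round in which each client mixes a small bounded buffer of past samples into its local update. This is a minimal illustrative stand-in (scalar linear model, made-up data), not the paper's algorithm.

```python
# Hypothetical sketch of limited-buffer federated continual learning:
# each client keeps a bounded deque of past samples (oldest evicted) and
# mixes it into every local gradient step; the server does a sample-count
# weighted average (FedAvg-style). Model and data are illustrative.
from collections import deque

BUFFER_SIZE = 32  # assumed per-client cap on retained past data

def local_update(w, new_batch, buffer, lr=0.1):
    """One gradient step for a scalar model y ~ w*x on new data mixed
    with whatever past samples survive in the bounded buffer."""
    mixed = new_batch + list(buffer)
    grad = sum((w * x - y) * x for x, y in mixed) / len(mixed)
    buffer.extend(new_batch)  # deque(maxlen=...) evicts oldest samples
    return w - lr * grad

def fedavg(weights, sizes):
    """Sample-count weighted average of client models."""
    return sum(w * n for w, n in zip(weights, sizes)) / sum(sizes)

# One round with two clients whose data follows y = 2x:
buf_a = deque(maxlen=BUFFER_SIZE)
buf_b = deque(maxlen=BUFFER_SIZE)
w_a = local_update(0.0, [(1.0, 2.0)], buf_a)              # -> 0.2
w_b = local_update(0.0, [(1.0, 2.0), (2.0, 4.0)], buf_b)  # -> 0.5
w_global = fedavg([w_a, w_b], [1, 2])                     # -> 0.4
```

Because the buffer is a fixed-size deque, memory stays bounded no matter how long the client runs, which is the resource constraint the paper's edge setting imposes on replay-based CL.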