Federated Concept-Based Models: Interpretable models with distributed supervision

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of interpretable modeling paradigms in federated learning, where existing frameworks struggle to leverage scarce, decentralized concept annotations in heterogeneous, dynamic cross-institutional settings. To this end, we propose Federated Concept-based Models (F-CMs), the first approach to enable interpretable modeling in federated learning with dynamic incorporation of new concept-level supervision. F-CMs integrate concept-based models with federated learning through a dynamic architecture adaptation mechanism, concept space fusion, and a privacy-preserving distributed training protocol, thereby supporting interpretable inference on locally unseen concepts while safeguarding institutional data privacy. Experimental results demonstrate that F-CMs significantly outperform non-adaptive federated baselines while maintaining accuracy and intervention efficacy comparable to fully supervised models.
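The "concept space fusion" and "dynamic architecture adaptation" mechanisms mentioned above can be illustrated with a minimal sketch: merge per-institution concept vocabularies into one global concept space, then grow a linear concept head to cover newly fused concepts. This is an illustrative toy, not the paper's actual implementation; the function names (`fuse_concept_spaces`, `expand_concept_head`) and the zero-ish initialization of new rows are assumptions for the example.

```python
import numpy as np

def fuse_concept_spaces(local_vocabs):
    """Merge per-institution concept vocabularies into one ordered global space."""
    fused = []
    for vocab in local_vocabs:
        for concept in vocab:
            if concept not in fused:
                fused.append(concept)
    return fused

def expand_concept_head(weights, old_vocab, new_vocab, rng):
    """Grow a linear concept head (one row per concept) after concept-space fusion.

    Rows for concepts the institution already predicts are copied over;
    rows for newly seen concepts get a fresh small random initialization
    (an assumption -- the paper may adapt the architecture differently).
    """
    in_dim = weights.shape[1]
    new_w = rng.normal(scale=0.01, size=(len(new_vocab), in_dim))
    for i, concept in enumerate(new_vocab):
        if concept in old_vocab:
            new_w[i] = weights[old_vocab.index(concept)]
    return new_w
```

After fusion, an institution that never annotated "beak" still gains an output row for it, which is what allows interpretable inference on locally unseen concepts.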

📝 Abstract
Concept-based models (CMs) enhance interpretability in deep learning by grounding predictions in human-understandable concepts. However, concept annotations are expensive to obtain and rarely available at scale within a single data source. Federated learning (FL) could alleviate this limitation by enabling cross-institutional training that leverages concept annotations distributed across multiple data owners. Yet, FL lacks interpretable modeling paradigms. Integrating CMs with FL is non-trivial: CMs assume a fixed concept space and a predefined model architecture, whereas real-world FL is heterogeneous and non-stationary, with institutions joining over time and bringing new supervision. In this work, we propose Federated Concept-based Models (F-CMs), a new methodology for deploying CMs in evolving FL settings. F-CMs aggregate concept-level information across institutions and efficiently adapt the model architecture in response to changes in the available concept supervision, while preserving institutional privacy. Empirically, F-CMs preserve the accuracy and intervention effectiveness of training settings with full concept supervision, while outperforming non-adaptive federated baselines. Notably, F-CMs enable interpretable inference on concepts not available to a given institution, a key novelty with respect to existing approaches.
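The abstract's two core ingredients, concept-based prediction with expert intervention and federated aggregation, can be sketched as follows. The concept bottleneck forward pass and FedAvg-style weighted averaging are standard techniques from the CM and FL literatures; the specific functions and shapes below are illustrative assumptions, not the F-CM protocol itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbm_predict(x, W_concept, W_task, interventions=None):
    """Concept bottleneck forward pass: input -> concept scores -> task logits.

    `interventions` maps a concept index to a known 0/1 value supplied by an
    expert; the task head then reasons from the corrected concept vector,
    which is what makes intervention effectiveness measurable.
    """
    c = sigmoid(W_concept @ x)            # predicted concept activations
    if interventions:
        for idx, value in interventions.items():
            c[idx] = value                # expert overrides a predicted concept
    return c, W_task @ c                  # task logits depend only on concepts

def fedavg(weight_list, sizes):
    """FedAvg-style aggregation: size-weighted average of client parameters,
    so raw institutional data never leaves its owner."""
    sizes = np.asarray(sizes, dtype=float)
    return sum(w * s for w, s in zip(weight_list, sizes)) / sizes.sum()
```

In this toy, only parameter matrices cross institutional boundaries, matching the privacy constraint the abstract describes; the paper's actual protocol additionally handles heterogeneous, evolving concept supervision.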
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Concept-based Models
Interpretability
Distributed Supervision
Model Heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Concept-based Models
Interpretable AI
Dynamic Architecture Adaptation
Distributed Supervision