CAFEDistill: Learning Personalized and Dynamic Models through Federated Early-Exit Network Distillation

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing dynamic inference demands and resource efficiency across heterogeneous clients in personalized federated learning. The authors propose a conflict-aware federated early-exit network distillation framework that jointly optimizes personalization and dynamic inference through a depth-first student coordination mechanism and a client-decoupled communication strategy. This approach effectively mitigates conflicts among exit heads and enhances cross-client knowledge transfer. Experimental results demonstrate that the method significantly outperforms state-of-the-art approaches, achieving higher accuracy while reducing inference costs by 30.79%–46.86%.
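For readers unfamiliar with early-exit networks: the idea is to attach intermediate classifiers ("exit heads") along the backbone so that inference can stop as soon as a prediction is confident enough, trading depth for compute. Below is a minimal PyTorch sketch of this general pattern; the backbone, exit placement, and confidence threshold are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal early-exit network sketch (illustrative; NOT CAFEDistill's actual model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10, width=64, num_blocks=4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, width, 3, padding=1), nn.ReLU())
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
            for _ in range(num_blocks)
        ])
        # One intermediate classifier ("exit head") per backbone block.
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(width, num_classes))
            for _ in range(num_blocks)
        ])

    def forward(self, x):
        """Training mode: return logits from every exit head."""
        h = self.stem(x)
        all_logits = []
        for block, head in zip(self.blocks, self.exits):
            h = block(h)
            all_logits.append(head(h))
        return all_logits

    @torch.no_grad()
    def dynamic_inference(self, x, threshold=0.9):
        """Inference mode (batch size 1): stop at the first exit whose
        max softmax confidence clears the threshold."""
        h = self.stem(x)
        for depth, (block, head) in enumerate(zip(self.blocks, self.exits)):
            h = block(h)
            logits = head(h)
            conf, pred = F.softmax(logits, dim=-1).max(dim=-1)
            if conf.item() >= threshold or depth == len(self.blocks) - 1:
                return pred, depth  # prediction plus the exit depth used
```

Lowering the threshold shifts traffic toward shallow exits and reduces average compute, which is the knob that makes inference cost adaptive per input and per device.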

📝 Abstract
Personalized Federated Learning (PFL) enables collaborative model training on decentralized, heterogeneous data while tailoring models to each client's unique distribution. However, existing PFL methods produce static models with a fixed tradeoff between accuracy and efficiency, limiting their applicability in environments where inference requirements vary with context and resource availability. Early-exit networks (EENs) offer adaptive inference by attaching intermediate classifiers. Yet integrating them into PFL is challenging due to client-wise heterogeneity and depth-wise interference arising from conflicting exit objectives. Prior studies fail to resolve both conflicts simultaneously, leading to suboptimal performance. In this paper, we propose CAFEDistill, a Conflict-Aware Federated Exit Distillation framework that jointly addresses these conflicts and extends PFL to early-exit networks. Through a progressive, depth-prioritized student coordination mechanism, CAFEDistill mitigates interference between shallow and deep exits while enabling effective personalized knowledge transfer across clients. Furthermore, it reduces communication overhead via a client-decoupled formulation. Extensive evaluations show that CAFEDistill outperforms state-of-the-art methods, achieving higher accuracy while reducing inference costs by 30.79%-46.86%.
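The abstract does not spell out the depth-prioritized student coordination, but a common way to couple exit heads is to combine each exit's task loss with distillation from a deeper, stronger exit. The sketch below follows that reading; the teacher choice (deepest exit), the weighting `alpha`, and the `temperature` are hypothetical and may differ from the authors' actual formulation.

```python
# Hedged sketch of a depth-wise exit distillation loss (assumed, not the paper's).
import torch.nn.functional as F

def exit_distillation_loss(exit_logits, targets, alpha=0.5, temperature=2.0):
    """exit_logits: list of [B, C] logit tensors ordered shallow -> deep.
    Each shallow exit is trained on ground truth plus KL distillation from
    the detached deepest exit, so shallow heads follow the deep head
    instead of fighting it."""
    teacher = exit_logits[-1].detach()  # deepest exit as a fixed teacher
    total = F.cross_entropy(exit_logits[-1], targets)  # deepest: task loss only
    for logits in exit_logits[:-1]:
        task = F.cross_entropy(logits, targets)
        kd = F.kl_div(
            F.log_softmax(logits / temperature, dim=-1),
            F.softmax(teacher / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2  # standard temperature scaling for distillation
        total = total + (1 - alpha) * task + alpha * kd
    return total
```

In a federated round, each client would apply a loss of this shape locally; how CAFEDistill coordinates exits across heterogeneous clients and what its client-decoupled communication actually exchanges are detailed in the paper itself.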
Problem

Research questions and friction points this paper is trying to address.

Personalized Federated Learning
Early-Exit Networks
Heterogeneity
Depth-wise Interference
Dynamic Inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized Federated Learning
Early-Exit Networks
Federated Distillation
Conflict-Aware Coordination
Communication Efficiency
Boyi Liu
Snowflake AI Research
Reinforcement Learning, LLM, AI Agent
Zimu Zhou
DS, City University of Hong Kong
Yongxin Tong
SKLCCSE, Beihang University