Exploiting Features and Logits in Heterogeneous Federated Learning

📅 2022-10-27
🏛️ arXiv.org
📈 Citations: 2 · Influential citations: 0
🤖 AI Summary
In heterogeneous federated learning (HFL), collaborative training efficiency suffers from significant disparities in client device capabilities, non-independent and identically distributed (non-IID) data, and architectural heterogeneity across local models. To address these challenges, this paper proposes a dual-path knowledge distillation framework that jointly models heterogeneity in the feature and logit spaces, introducing a bidirectional distillation mechanism to co-optimize local feature representations and output logits. The method integrates feature alignment, logit calibration, and heterogeneity-aware training, enabling cross-device model collaboration while preserving data locality and privacy. Evaluated on multiple HFL benchmarks, the approach achieves average accuracy improvements of 3.2–5.8%, accelerates convergence by 40%, and reduces communication overhead by 22%.
Problem

Research questions and friction points this paper is trying to address.

Supporting heterogeneous client model architectures in federated learning
Exploiting features and logits for collaborative model training
Accommodating diverse device capabilities without sharing raw data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneous FL via feature and logit averaging (see the sketch after this list)
Conditional VAE for synthetic feature generation
Data-free training across diverse client model architectures