Adaptive Guidance for Local Training in Heterogeneous Federated Learning

📅 2024-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the parameter non-aggregability, objective misalignment, and low training efficiency that arise from architectural heterogeneity in federated learning, this paper proposes FedL2G (Federated Learning-to-Guide), a guidance framework for heterogeneous federated learning. Its core innovation is a learnable, federation-level guidance mechanism: a lightweight guidance objective is learned across clients to provide client-specific training signals that stay aligned with each client's original local objective. The method needs only first-order derivatives w.r.t. model parameters, and the alignment regularization comes with a theoretical guarantee of an $O(1/T)$ convergence rate for non-convex objectives. Evaluated across two data-heterogeneity and six model-heterogeneity settings spanning 14 mainstream architectures (e.g., CNNs and ViTs), FedL2G consistently outperforms seven state-of-the-art methods.

📝 Abstract
Model heterogeneity poses a significant challenge in Heterogeneous Federated Learning (HtFL). In scenarios with diverse model architectures, directly aggregating model parameters is impractical, leading HtFL methods to incorporate an extra objective alongside the original local objective on each client to facilitate collaboration. However, this often results in a mismatch between the extra and local objectives. To resolve this, we propose Federated Learning-to-Guide (FedL2G), a method that adaptively learns to guide local training in a federated manner, ensuring the added objective aligns with each client's original goal. With theoretical guarantees, FedL2G utilizes only first-order derivatives w.r.t. model parameters, achieving a non-convex convergence rate of O(1/T). We conduct extensive experiments across two data heterogeneity and six model heterogeneity settings, using 14 heterogeneous model architectures (e.g., CNNs and ViTs). The results show that FedL2G significantly outperforms seven state-of-the-art methods.
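The abstract's core idea — an extra, federated-learned objective that pulls each client's local training toward a shared guidance signal using only first-order gradients — can be caricatured in a few lines. The sketch below is a hypothetical NumPy illustration, not the paper's actual algorithm: the prototype-style `guidance` vectors, the `local_step`/`server_update` split, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, FEAT_DIM, LR, LAM = 3, 4, 0.1, 0.5  # hypothetical sizes/weights

def align_loss(features, labels, guidance):
    # Mean squared distance between each feature and its class guidance vector.
    return float(np.mean(np.sum((features - guidance[labels]) ** 2, axis=1)))

def local_step(features, labels, guidance):
    # First-order gradient of the extra alignment term 0.5*LAM*||f - g_y||^2.
    grads = LAM * (features - guidance[labels])
    features = features - LR * grads
    # Client feedback to the server: per-class mean feature.
    feedback = np.stack([features[labels == c].mean(axis=0)
                         for c in range(NUM_CLASSES)])
    return features, feedback

def server_update(feedbacks):
    # Server refits the guidance toward the average client feedback (FedAvg-style).
    return np.mean(feedbacks, axis=0)

# Two toy clients; real HtFL clients would hold heterogeneous architectures,
# but here each is reduced to its feature outputs for illustration.
clients = [(rng.normal(size=(6, FEAT_DIM)), np.array([0, 0, 1, 1, 2, 2]))
           for _ in range(2)]
guidance = rng.normal(size=(NUM_CLASSES, FEAT_DIM))

start = sum(align_loss(f, y, guidance) for f, y in clients)
for _ in range(20):
    feedbacks, next_clients = [], []
    for feats, labels in clients:
        feats, fb = local_step(feats, labels, guidance)
        next_clients.append((feats, labels))
        feedbacks.append(fb)
    clients = next_clients
    guidance = server_update(feedbacks)
end = sum(align_loss(f, y, guidance) for f, y in clients)
# The feature-to-guidance alignment gap shrinks over rounds (end < start).
```

In the actual method the extra objective is added alongside each client's original loss and the guidance itself is learned so the two objectives do not conflict; this toy only shows the alignment half of that loop.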
Problem

Research questions and friction points this paper is trying to address.

Heterogeneous Federated Learning
Model Parameter Aggregation
Training Efficiency and Objective Consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

FedL2G
Heterogeneous Federated Learning
Consistent Learning Objectives
Jianqing Zhang
Shanghai Jiao Tong University, Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3
Yang Liu
Institute for AI Industry Research (AIR), Tsinghua University, Shanghai Artificial Intelligence Laboratory
Yang Hua
Queen’s University Belfast
Jian Cao
Shanghai Jiao Tong University, Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3
Qian Yang