Cross-Training with Multi-View Knowledge Fusion for Heterogeneous Federated Learning

📅 2024-05-30
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
To address local-to-global knowledge forgetting and feature-space inconsistency caused by data distribution shifts in heterogeneous federated learning, this paper proposes FedCT, a cross-training framework. Methodologically, FedCT introduces (1) consistency-aware knowledge broadcasting to optimize model assignment across clients; (2) multi-view knowledge-guided representation learning that jointly models global class prototypes and local discriminative structures; and (3) mixup-based feature augmentation that enriches feature-space diversity so the model can better discriminate complex samples. Evaluated on four standard benchmarks, FedCT significantly outperforms state-of-the-art methods. Ablation studies and case analyses validate that the three components synergistically mitigate knowledge forgetting while enhancing representation stability and class discriminability.

📝 Abstract
Federated learning benefits from cross-training strategies, which enable models to train on data from distinct sources and thereby improve generalization. However, data heterogeneity between sources may lead models to gradually forget previously acquired knowledge when undergoing cross-training to adapt to new tasks or data sources. We argue that integrating personalized and global knowledge to gather information from multiple perspectives could potentially improve performance. To achieve this goal, this paper presents a novel approach that enhances federated learning through a cross-training scheme incorporating multi-view information. Specifically, the proposed method, termed FedCT, includes three main modules. The consistency-aware knowledge broadcasting module optimizes model assignment strategies, which enhances collaborative advantages between clients and achieves an efficient federated learning process. The multi-view knowledge-guided representation learning module leverages fused prototypical knowledge from both global and local views to better preserve local knowledge before and after model exchange, and to ensure consistency between local and global knowledge. The mixup-based feature augmentation module aggregates rich information to further increase the diversity of feature spaces, which enables the model to better discriminate complex samples. Extensive experiments were conducted on four datasets, covering performance comparison, ablation study, in-depth analysis, and case study. The results demonstrate that FedCT alleviates knowledge forgetting from both local and global views, which enables it to outperform state-of-the-art methods.
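The mixup-based feature augmentation the abstract describes follows the standard mixup recipe: convexly combine pairs of feature vectors (and their labels) with a Beta-distributed coefficient. The sketch below is a minimal illustration of that general technique, not the authors' implementation; the function name `feature_mixup`, the parameter `alpha`, and the toy arrays are assumptions.

```python
import numpy as np

def feature_mixup(feats, labels, alpha=0.4, rng=None):
    """Mix each feature vector with a randomly paired one.

    Returns the mixed features, both label sets, and the mixing
    coefficient lam, so a training loss can be interpolated as
    lam * loss(y_a) + (1 - lam) * loss(y_b).
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)            # mixing coefficient in [0, 1]
    perm = rng.permutation(len(feats))      # random pairing of samples
    mixed = lam * feats + (1 - lam) * feats[perm]
    return mixed, labels, labels[perm], lam

feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
labels = np.array([0, 1, 0])
mixed, y_a, y_b, lam = feature_mixup(feats, labels)
```

Because mixing happens in feature space rather than on raw inputs, the augmented samples populate regions between class clusters, which is one plausible reading of the abstract's claim about increased feature-space diversity.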
Problem

Research questions and friction points this paper is trying to address.

Addresses data heterogeneity in federated learning cross-training
Aligns local and global knowledge via multi-view distillation
Enhances feature diversity to improve sample discrimination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Consistency-aware knowledge broadcasting for model assignment
Multi-view knowledge-guided representation learning
Mixup-based feature augmentation for diversity
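The multi-view representation learning above relies on class prototypes (per-class mean features) shared from the global view. A minimal sketch of that generic building block follows; the function names and the squared-distance objective are illustrative assumptions, and FedCT's actual fused-prototype objective is richer than this.

```python
import numpy as np

def class_prototypes(feats, labels, num_classes):
    """Mean feature vector per class: a simple class prototype."""
    return np.stack(
        [feats[labels == c].mean(axis=0) for c in range(num_classes)]
    )

def prototype_alignment_loss(feats, labels, global_protos):
    """Mean squared distance between each local feature and its
    class prototype from the global view; minimizing it pulls
    local representations toward the shared class structure."""
    diffs = feats - global_protos[labels]
    return float((diffs ** 2).sum(axis=1).mean())
```

In a federated round, each client could compute local prototypes, the server could average them into global ones, and clients would then add a loss of this form to keep local features consistent with the global view before and after model exchange.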
Zhuang Qi
School of Software, Shandong University, China
Lei Meng
Shandong University and Shandong Research Institute of Industrial Technology, China
Weihao He
School of Software, Shandong University
Ruohan Zhang
Stanford University
Yu Wang
Shandong Research Institute of Industrial Technology
Xin Qi
School of Chemistry and Life Sciences, Suzhou University of Science and Technology, China
Xiangxu Meng
School of Software, Shandong University, China