DeepFusion: Accelerating MoE Training via Federated Knowledge Distillation from Heterogeneous Edge Devices

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of deploying full Mixture-of-Experts (MoE) models on resource-constrained heterogeneous edge devices, which hinders their participation in federated learning. To overcome this limitation, the authors propose DeepFusion, a novel framework that enables scalable federated MoE training for the first time. DeepFusion leverages federated knowledge distillation to aggregate knowledge from customized local large language models across devices and introduces a View-Aligned Attention (VAA) module to effectively mitigate prediction view mismatches inherent in cross-architecture knowledge distillation. Experiments on Qwen-MoE and DeepSeek-MoE demonstrate that the proposed method achieves performance comparable to centralized training while reducing communication overhead by up to 71% and improving token perplexity by as much as 5.28%.
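The summary describes two mechanisms: fusing knowledge from heterogeneous on-device LLMs into one global MoE model through federated knowledge distillation, and aligning prediction views with VAA. As a rough illustration of the first mechanism only, the PyTorch sketch below shows a generic server-side distillation round in which temperature-softened predictions from several on-device teacher models are averaged and matched by the global MoE student on a proxy batch; the names (global_moe, device_teachers, proxy_loader, tau) and the assumptions of a shared vocabulary and an available proxy dataset are illustrative, not taken from the paper.

```python
# Hedged sketch of a generic federated knowledge-distillation round (not the
# authors' algorithm). Assumes: each teacher is a callable returning logits,
# all models share one tokenizer/vocabulary, and a proxy dataset is available.
import torch
import torch.nn.functional as F

def distill_round(global_moe, device_teachers, proxy_loader, optimizer, tau=2.0):
    """One server-side round: fuse on-device teacher views into the MoE student."""
    global_moe.train()
    for batch in proxy_loader:                       # batch: (B, seq) token ids
        with torch.no_grad():
            # Average the temperature-softened predictions of the heterogeneous teachers.
            teacher_probs = torch.stack(
                [F.softmax(t(batch) / tau, dim=-1) for t in device_teachers]
            ).mean(dim=0)
        student_logp = F.log_softmax(global_moe(batch) / tau, dim=-1)
        # KL divergence between the fused teacher view and the MoE student.
        loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean") * tau ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```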

📝 Abstract
Recent Mixture-of-Experts (MoE)-based large language models (LLMs) such as Qwen-MoE and DeepSeek-MoE are transforming generative AI in natural language processing. However, these models require vast and diverse training data. Federated learning (FL) addresses this challenge by leveraging private data from heterogeneous edge devices for privacy-preserving MoE training. Nonetheless, traditional FL approaches require devices to host local MoE models, which is impractical for resource-constrained devices due to large model sizes. To address this, we propose DeepFusion, the first scalable federated MoE training framework that enables the fusion of heterogeneous on-device LLM knowledge via federated knowledge distillation, yielding a knowledge-abundant global MoE model. Specifically, DeepFusion lets each device independently configure and train an on-device LLM tailored to its own needs and hardware limitations. Furthermore, we propose a novel View-Aligned Attention (VAA) module that integrates multi-stage feature representations from the global MoE model to construct a predictive perspective aligned with on-device LLMs, thereby enabling effective cross-architecture knowledge distillation. By explicitly aligning predictive perspectives, VAA resolves the view-mismatch problem in traditional federated knowledge distillation, which arises from heterogeneity in model architectures and prediction behaviors between on-device LLMs and the global MoE model. Experiments with industry-level MoE models (Qwen-MoE and DeepSeek-MoE) and real-world datasets (medical and finance) demonstrate that DeepFusion achieves performance close to centralized MoE training. Compared with key federated MoE baselines, DeepFusion reduces communication costs by up to 71% and improves token perplexity by up to 5.28%.
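The abstract only states that VAA integrates multi-stage feature representations from the global MoE model into a predictive perspective aligned with the on-device LLMs. The sketch below is one plausible PyTorch reading of that description, not the authors' implementation: hidden states from several MoE stages are pooled and fused by a learnable attention query into a single view-aligned representation that a distillation loss could then compare against a teacher's view. The class name, stage selection, pooling choice, and dimensions are assumptions.

```python
# Hedged sketch of a View-Aligned Attention (VAA) style module, based only on
# the abstract's high-level description; stage selection, pooling, and the
# learnable query are assumptions.
import torch
import torch.nn as nn

class ViewAlignedAttention(nn.Module):
    def __init__(self, hidden_dim: int, num_heads: int = 8):
        super().__init__()
        # One learnable query attends over the selected MoE stages.
        self.query = nn.Parameter(torch.randn(1, 1, hidden_dim))
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, stage_states: list[torch.Tensor]) -> torch.Tensor:
        # stage_states: features from several MoE stages, each (B, seq, hidden_dim).
        # Pool each stage over the sequence, then treat the stages as keys/values.
        stages = torch.stack([s.mean(dim=1) for s in stage_states], dim=1)  # (B, S, H)
        q = self.query.expand(stages.size(0), -1, -1)                        # (B, 1, H)
        aligned, _ = self.attn(q, stages, stages)                            # (B, 1, H)
        return self.norm(aligned.squeeze(1))   # view-aligned representation, (B, H)
```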
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Experts
Federated Learning
Knowledge Distillation
Edge Devices
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Knowledge Distillation
Mixture-of-Experts
View-Aligned Attention
Heterogeneous Edge Devices
Scalable MoE Training
Songyuan Li
School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K.
Jia Hu
University of Exeter
edge-cloud computing, resource optimization, smart city, network security, applied machine learning
Ahmed M. Abdelmoniem
School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K.
Geyong Min
University of Exeter
Haojun Huang
School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
Jiwei Huang
Professor, China University of Petroleum, Beijing, China
Services Computing, Performance Evaluation, Internet of Things, Quality of Service