Unlocking Personalized Knowledge in Federated Large Language Model: The Power of Mixture of Experts

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning (FL), sparse Mixture-of-Experts (MoE) large language models suffer from high communication and computational overhead, and limited personalized knowledge sharing, as their sparsity remains unexploited. To address this, we propose FLEx—a novel FL framework that leverages MoE sparsity intrinsically. FLEx trains and retains only one expert per client locally, while globally aggregating and sharing the backbone and gating modules. Crucially, it employs adaptive gating to dynamically activate personalized experts during inference, enabling expert-level customization without modifying the shared backbone. This constitutes the first FL architecture to deeply integrate MoE sparsity, introducing the paradigm of “local expert retention + global gating reintegration.” Evaluated under non-IID settings across multiple instruction-tuning datasets, FLEx significantly outperforms existing FL baselines—reducing communication cost and improving personalized model performance. The implementation is publicly available.
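The adaptive gating idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the top-1 routing over shared experts, and the sigmoid gate `alpha_w` are all simplifying assumptions made for illustration; the point is only that a personalized expert is blended in without modifying the shared backbone.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_gated_moe(token, shared_experts, gate_w, local_expert, alpha_w):
    """Illustrative sketch (not the paper's code): route a token through the
    frozen shared experts, then blend in the client's personalized expert via
    a learned adaptive gate."""
    # Standard top-1 routing over the shared experts (backbone unchanged)
    scores = softmax(gate_w @ token)
    shared_out = shared_experts[int(scores.argmax())](token)
    # Adaptive gate decides how much of the local expert to mix in
    alpha = 1.0 / (1.0 + np.exp(-(alpha_w @ token)))  # sigmoid, scalar
    return (1 - alpha) * shared_out + alpha * local_expert(token)
```

Here each expert is just a callable on a token vector; in the real model these would be MoE feed-forward blocks, and the adaptive gate would be trained on local client data.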

📝 Abstract
The Mixture of Experts (MoE) architecture has emerged as a prominent strategy for scaling large language models (LLMs), effectively leveraging sparse activation and facilitating task-specific personalization. However, current federated learning (FL) approaches are primarily designed for dense models, making them unable to directly exploit the sparsity inherent in MoE architectures. Treating MoE models as dense networks in federated scenarios results in excessive communication overhead and computational costs, undermining the potential for personalized knowledge sharing. To address these challenges, we propose FLEx (Federated LLMs with Personalized Experts), a novel federated learning framework explicitly tailored for MoE-based LLMs. FLEx efficiently personalizes by pruning the global MoE model to keep only one expert per client, and employs an adaptive gating mechanism to reintegrate these personalized experts into the pre-trained MoE layers, ensuring the original backbone architecture remains unchanged. These personalized experts are trained with local data and stored locally on each client, while the shared modules are aggregated globally. Extensive evaluations on diverse instruction-based datasets under non-IID conditions consistently demonstrate that FLEx outperforms existing federated baselines. Our code is available at https://anonymous.4open.science/r/FLEx-8F12.
Problem

Research questions and friction points this paper is trying to address.

Optimizing federated learning for sparse MoE-based LLMs
Reducing communication and computational costs in FL
Enabling personalized knowledge sharing in federated scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Mixture of Experts for personalized federated learning
Prunes global MoE model to one expert per client
Adaptive gating reintegrates personalized experts globally
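The communication pattern behind these contributions can be sketched as follows. This is a toy simplification assumed for illustration, not the paper's code: parameters are represented as plain scalars in a dict, and the server runs a FedAvg-style mean over only the shared backbone and gating keys, while each client's personalized expert never leaves the device.

```python
def federated_round(clients, shared_keys):
    """Illustrative sketch of one FLEx-style round: average only the shared
    modules (backbone, gating) across clients; each client's personalized
    expert stays local and is never communicated."""
    # Server-side FedAvg over the shared modules only
    avg = {
        k: sum(c["params"][k] for c in clients) / len(clients)
        for k in shared_keys
    }
    # Broadcast: overwrite shared params, leave local experts untouched
    for c in clients:
        for k in shared_keys:
            c["params"][k] = avg[k]
    return avg
```

Because only `shared_keys` ever cross the network, per-round communication scales with the shared modules rather than the full MoE model, which is the source of the cost reduction claimed above.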