🤖 AI Summary
In federated learning (FL), sparse Mixture-of-Experts (MoE) large language models incur high communication and computational overhead and limit personalized knowledge sharing, because their sparsity goes unexploited. To address this, we propose FLEx, a novel FL framework that intrinsically leverages MoE sparsity. FLEx trains and retains only one expert per client locally, while globally aggregating and sharing the backbone and gating modules. Crucially, it employs an adaptive gating mechanism to dynamically activate personalized experts during inference, enabling expert-level customization without modifying the shared backbone. This is the first FL architecture to deeply integrate MoE sparsity, introducing the paradigm of "local expert retention + global gating reintegration." Evaluated under non-IID settings across multiple instruction-tuning datasets, FLEx significantly outperforms existing FL baselines, reducing communication cost while improving personalized model performance. The implementation is publicly available.
📝 Abstract
The Mixture of Experts (MoE) architecture has emerged as a prominent strategy for scaling large language models (LLMs), effectively leveraging sparse activation and facilitating task-specific personalization. However, current federated learning (FL) approaches are primarily designed for dense models, making them unable to directly exploit the sparsity inherent in MoE architectures. Treating MoE models as dense networks in federated scenarios results in excessive communication overhead and computational costs, undermining the potential for personalized knowledge sharing. To address these challenges, we propose FLEx (Federated LLMs with Personalized Experts), a novel federated learning framework explicitly tailored for MoE-based LLMs. FLEx achieves efficient personalization by pruning the global MoE model so that each client keeps only one expert, and employs an adaptive gating mechanism to reintegrate these personalized experts into the pre-trained MoE layers, leaving the original backbone architecture unchanged. These personalized experts are trained on local data and stored locally on each client, while the shared modules are aggregated globally. Extensive evaluations on diverse instruction-based datasets under non-IID conditions consistently demonstrate that FLEx outperforms existing federated baselines. Our code is available at https://anonymous.4open.science/r/FLEx-8F12.
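To make the "local expert retention + global gating reintegration" idea concrete, here is a minimal toy sketch of an FLEx-style MoE layer. It is an illustration under stated assumptions, not the paper's implementation: experts are reduced to scalar linear maps, the shared router is an ordinary softmax gate, and the adaptive gate is modeled as a single locally trained sigmoid score (`adaptive_gate_w`) that mixes the personalized expert's output with the frozen shared mixture. All names here (`ToyMoELayer`, `adaptive_gate_w`, `personal_w`) are hypothetical.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of router scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class ToyMoELayer:
    """Toy sketch of FLEx-style reintegration (hypothetical, not the paper's code).

    The shared backbone (expert weights + router) is frozen and globally
    aggregated; each client additionally keeps one personalized expert,
    trained on local data and never uploaded, plus a small adaptive gate
    that decides how strongly to activate it at inference time.
    """
    def __init__(self, shared_ws, personal_w, adaptive_gate_w):
        self.shared_ws = shared_ws            # frozen, globally shared experts
        self.personal_w = personal_w          # local personalized expert
        self.adaptive_gate_w = adaptive_gate_w  # local adaptive-gate parameter

    def forward(self, x):
        # Frozen router produces a mixture over the shared experts.
        gate = softmax([w * x for w in self.shared_ws])
        shared_out = sum(g * w * x for g, w in zip(gate, self.shared_ws))
        # Personalized expert runs in parallel, without touching the backbone.
        personal_out = self.personal_w * x
        # Adaptive gate: a sigmoid score sets the personalized share alpha.
        alpha = 1.0 / (1.0 + math.exp(-self.adaptive_gate_w * x))
        return (1.0 - alpha) * shared_out + alpha * personal_out
```

With `adaptive_gate_w = 0` the gate sits at `alpha = 0.5`, splitting the output evenly between the shared mixture and the local expert; training the gate on client data would let it shift toward the personalized expert only on inputs where local knowledge helps.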