🤖 AI Summary
To address the prohibitively high computational overhead of federated fine-tuning of Mixture-of-Experts (MoE) large language models on resource-constrained devices (e.g., consumer-grade GPUs), this paper proposes a sparse-activation federated fine-tuning framework. Our method integrates model quantization, federated learning, and expert activation optimization. Key contributions include: (1) a quantized local performance estimation mechanism for lightweight feasibility assessment of client-side training; (2) a layer-aware adaptive expert merging strategy that dynamically compresses expert architectures to reduce both communication and computation costs; and (3) an exploration-exploitation-balanced dynamic expert role assignment scheme to enhance global convergence stability and accuracy. Evaluated on LLaMA-MoE and DeepSeek-MoE, our approach achieves up to a 4.75× speedup in time-to-accuracy—measured as wall-clock time required to reach target validation accuracy—significantly outperforming existing federated MoE baselines.
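The exploration-exploitation-balanced role assignment can be pictured with a simple epsilon-greedy selection over experts. The sketch below is a hypothetical illustration, not the paper's actual algorithm: `assign_expert_roles`, the score dictionary, and the epsilon value are all assumed for exposition, with activation scores standing in for the quantized local profiling estimates.

```python
# Hypothetical sketch of exploration-exploitation expert role assignment
# (illustrative only; not FLUX's published algorithm).
import random

def assign_expert_roles(activation_scores, num_tuning, epsilon=0.1):
    """Choose which experts to fine-tune this round.

    activation_scores: dict expert_id -> estimated activation frequency
                       (e.g., produced by lightweight local profiling).
    num_tuning: how many experts the client can afford to tune.
    Exploitation: prefer highly activated experts.
    Exploration: with probability epsilon, swap in a currently frozen expert
    so rarely activated experts still receive occasional tuning.
    """
    ranked = sorted(activation_scores, key=activation_scores.get, reverse=True)
    tuning = ranked[:num_tuning]
    frozen = ranked[num_tuning:]
    for i in range(len(tuning)):
        if frozen and random.random() < epsilon:
            j = random.randrange(len(frozen))
            tuning[i], frozen[j] = frozen[j], tuning[i]
    return set(tuning), set(frozen)

# Example: 8 experts in a layer, budget to tune 3 of them.
scores = {e: random.random() for e in range(8)}
tuning, frozen = assign_expert_roles(scores, num_tuning=3)
print("tuning:", sorted(tuning), "frozen:", sorted(frozen))
```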
📝 Abstract
Federated fine-tuning of Mixture-of-Experts (MoE)-based large language models (LLMs) is challenging due to their massive computational requirements and the resource constraints of participants. Existing work attempts to fill this gap through model quantization, computation offloading, or expert pruning. However, it cannot achieve the desired performance due to impractical system assumptions and a lack of consideration for MoE-specific characteristics. In this paper, we propose FLUX, a system designed to enable federated fine-tuning of MoE-based LLMs across participants with constrained computing resources (e.g., consumer-grade GPUs), aiming to minimize time-to-accuracy. FLUX introduces three key innovations: (1) quantization-based local profiling to estimate expert activation with minimal overhead, (2) adaptive layer-aware expert merging to reduce resource consumption while preserving accuracy, and (3) dynamic expert role assignment using an exploration-exploitation strategy to balance tuning and non-tuning experts. Extensive experiments on LLaMA-MoE and DeepSeek-MoE with multiple benchmark datasets demonstrate that FLUX significantly outperforms existing methods, achieving up to a 4.75× speedup in time-to-accuracy.
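To make the adaptive layer-aware expert merging idea concrete, the following minimal sketch shows one plausible realization under assumed details: `merge_experts`, the per-layer `keep_k` budget, and the similarity-based folding rule are all hypothetical choices, not the paper's implementation. The only ingredients taken from the abstract are that merging is performed per layer and driven by expert activation statistics.

```python
# Illustrative sketch (assumed, not the paper's method) of layer-aware expert
# merging: per layer, keep the top-k most activated experts and fold each
# remaining expert into its most similar kept expert by parameter averaging.
import numpy as np

def merge_experts(expert_weights, activations, keep_k):
    """expert_weights: list of (d_in, d_out) arrays, one per expert in a layer.
    activations: per-expert activation frequency from local profiling.
    keep_k: number of experts retained in this layer (layer-aware budget)."""
    order = np.argsort(activations)[::-1]          # experts sorted by activation, descending
    keep, drop = order[:keep_k], order[keep_k:]
    merged = {k: [expert_weights[k]] for k in keep}
    for d in drop:
        # Fold a dropped expert into the kept expert with the most similar weights.
        sims = [np.dot(expert_weights[d].ravel(), expert_weights[k].ravel()) for k in keep]
        merged[keep[int(np.argmax(sims))]].append(expert_weights[d])
    # Average each group, shrinking the layer from len(expert_weights) to keep_k experts.
    return {k: np.mean(ws, axis=0) for k, ws in merged.items()}

# Example: a layer with 8 experts compressed to 4.
layer = [np.random.randn(16, 16) for _ in range(8)]
freq = np.random.rand(8)
compressed = merge_experts(layer, freq, keep_k=4)
print(len(compressed), "experts after merging")
```

A layer-aware scheme would presumably vary `keep_k` across layers, merging more aggressively where activations are concentrated on few experts and less where they are spread out; the exact budgeting rule is left to the paper.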