Training-Free Dynamic Upcycling of Expert Language Models

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training large language models is costly and struggles to balance expertise across diverse domains, while fine-tuning often leads to overfitting or catastrophic forgetting. To address these challenges, this work proposes Dynamic Upcycling Mixture of Experts (DUME), the first framework enabling dynamic fusion of expert models without any fine-tuning. DUME constructs a plug-and-play mixture-of-experts architecture via a closed-form ridge regression solution, eliminating the need for additional training or optimization. The method preserves 97.6% of expert model performance in causal language modeling and even reaches 102.1% of the dense expert's performance on reasoning tasks, significantly outperforming existing baselines. This approach enables efficient, scalable, and unified multi-task modeling with minimal computational overhead.
📝 Abstract
Large Language Models (LLMs) have achieved remarkable performance on a wide range of specialized tasks, exhibiting strong problem-solving capabilities. However, training these models is prohibitively expensive, and they often lack domain-specific expertise because they rely on general knowledge datasets. Expert fine-tuning can address this issue; however, it often leads to overspecialization, and developing a single multi-domain expert remains difficult due to diverging objectives. Furthermore, multitask training is challenging due to interference and catastrophic forgetting. Existing work proposes combining the expertise of dense models within a Mixture of Experts (MoE) architecture, although this approach still requires multitask fine-tuning. To address these issues, we introduce Dynamic Upcycling MoE (DUME), a novel approach that reuses dense experts trained on different domains to construct a unified MoE model. Our method builds a single multitask model that preserves the capabilities of the original dense experts without requiring additional training. DUME is both cost-efficient and scalable: by leveraging the closed-form solution of ridge regression, it eliminates the need for further optimization and enables experts to be added dynamically while maintaining the model's original performance. We demonstrate that DUME consistently outperforms baseline approaches in both causal language modeling and reasoning settings. Finally, we also show that the DUME model can be fine-tuned to further improve performance. We show that, in the causal language modeling setting, DUME can retain up to 97.6% of the performance of a dense expert model specialized in one particular domain, and that it can also surpass it in the reasoning setting, where it achieves 102.1% of the dense expert performance. Our code is available at: github.com/gensyn-ai/dume.
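The abstract's key mechanism is the closed-form solution of ridge regression, which lets DUME fit combination weights without any gradient-based training. The paper's exact construction is not reproduced here; the sketch below only illustrates the closed-form solve itself on hypothetical data (the feature/target setup, shapes, and penalty are assumptions for illustration):

```python
import numpy as np

# Hypothetical setup: map input features X (e.g. hidden activations) onto
# targets Y (e.g. desired expert outputs) with a training-free ridge fit.
rng = np.random.default_rng(0)
n_samples, n_features, n_outputs = 256, 32, 8
X = rng.normal(size=(n_samples, n_features))
Y = rng.normal(size=(n_samples, n_outputs))

lam = 1e-2  # ridge penalty; keeps the normal equations well-conditioned

# Closed form: W = (X^T X + lam * I)^(-1) X^T Y  -- solved directly,
# with no iterative optimization or fine-tuning.
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Sanity check: W minimizes ||XW - Y||^2 + lam * ||W||^2, so the gradient
# X^T (XW - Y) + lam * W should vanish at the solution.
grad = X.T @ (X @ W - Y) + lam * W
print(np.abs(grad).max() < 1e-8)
```

Because the solution is a single linear solve, adding a new expert only requires recomputing this closed form, which is consistent with the dynamic, training-free upcycling the abstract describes.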
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Domain Expertise
Mixture of Experts
Multitask Learning
Catastrophic Forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-Free
Mixture of Experts
Dynamic Upcycling
Ridge Regression
Multitask Language Modeling