🤖 AI Summary
Existing heterogeneous large language model (LLM) fusion methods select only the best output per prompt from the source models, underutilizing their knowledge and producing sparse preference-optimization signals. To address this, the authors propose FuseRL, a two-stage fusion framework: first, weighted supervised fine-tuning (FuseSFT) establishes a robust initialization and regularizes training to mitigate overfitting; second, weighted preference optimization (FusePO) leverages the outputs of multiple source models, replacing sparse best-output selection with dense, diverse optimization signals. The framework is compatible with RLOO, DPO, and SimPO. With Llama-3.1-8B-Instruct as the target model, it achieves state-of-the-art performance among 8B-class models on AlpacaEval-2 and Arena-Hard, and consistently improves alignment across diverse preference optimization methods, demonstrating both generality and effectiveness.
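The first stage, weighted SFT over diverse source-model outputs, can be sketched as below. The weighting scheme (e.g., normalized per-output quality scores) and the function name are illustrative assumptions; the summary does not specify the paper's exact formulation:

```python
import math

def fusesft_loss(outputs):
    """Sketch of a weighted SFT objective for one prompt.

    `outputs` is a list of (weight, token_logprobs) pairs, one per
    source-model output, where token_logprobs are the target model's
    log-probabilities of that output's tokens. The weights (assumed here
    to be quality scores) are normalized to sum to 1, so every source
    output contributes to the loss rather than only the single best one.
    """
    total_w = sum(w for w, _ in outputs)
    loss = 0.0
    for w, logps in outputs:
        # Mean negative log-likelihood of this source output.
        nll = -sum(logps) / len(logps)
        loss += (w / total_w) * nll
    return loss
```

In contrast to best-output selection, every source output contributes gradient signal here, with the weights controlling how much each one shapes the initialization.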
📝 Abstract
Heterogeneous model fusion enhances the performance of LLMs by integrating the knowledge and capabilities of multiple structurally diverse models. However, existing approaches often rely solely on selecting the best output for each prompt from source models, which exploits only a fraction of the source models' knowledge and results in sparse optimization signals. To address this limitation, we propose FuseRL, a novel two-stage framework comprising FuseSFT and FusePO to maximize the utilization of source LLMs. FuseSFT establishes a robust initialization by integrating the strengths of heterogeneous source models through weighted supervised fine-tuning (SFT) on diverse outputs for each prompt. FusePO optimizes weighted preferences based on the outputs of multiple source models to enable superior alignment performance. Extensive experiments demonstrate the effectiveness of our framework across various preference alignment methods, including RLOO, DPO, and SimPO. Using Llama-3.1-8B-Instruct as the target model, our approach achieves state-of-the-art performance among 8B LLMs on the AlpacaEval-2 and Arena-Hard benchmarks. Further analysis suggests that FuseSFT regularizes the training process to reduce overfitting, while FusePO introduces dense and diverse signals for preference optimization.
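As a concrete illustration of how weighted preferences over multiple source-model outputs densify the training signal, here is a minimal sketch in the DPO setting. The per-pair weighting and the function name are assumptions for illustration, not the paper's exact objective:

```python
import math

def weighted_dpo_loss(pairs, beta=0.1):
    """Sketch of a weighted DPO-style objective for one prompt.

    `pairs` is a list of (weight, chosen_logratio, rejected_logratio),
    one entry per chosen/rejected pair drawn from the source models'
    outputs, where logratio = log pi_theta(y|x) - log pi_ref(y|x).
    Instead of a single best-vs-rest pair, every weighted pair
    contributes, turning a sparse signal into a dense one.
    """
    total_w = sum(w for w, _, _ in pairs)
    loss = 0.0
    for w, lr_chosen, lr_rejected in pairs:
        # Standard DPO term: -log sigmoid(beta * margin),
        # scaled by the normalized per-pair weight.
        margin = beta * (lr_chosen - lr_rejected)
        loss += (w / total_w) * math.log(1.0 + math.exp(-margin))
    return loss
```

The same weighting idea carries over to RLOO and SimPO by swapping the per-pair term, which is consistent with the abstract's claim that the framework applies across preference alignment methods.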