FuseRL: Dense Preference Optimization for Heterogeneous Model Fusion

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing heterogeneous large language model (LLM) fusion methods select only the best output per prompt from source models, resulting in suboptimal knowledge utilization and sparse preference optimization signals. To address this, we propose a two-stage fusion framework: first, weighted supervised fine-tuning (FuseSFT) establishes a robust initialization to mitigate overfitting; second, dense weighted preference optimization (FusePO) leverages multi-model outputs to transform traditional sparse "best-output selection" into dense signal modeling, yielding gradient-rich and diversity-enhanced optimization signals. The framework is compatible with RLOO, DPO, and SimPO. With Llama-3.1-8B-Instruct as the target model, it achieves state-of-the-art performance among 8B-class models on AlpacaEval-2 and Arena-Hard, while consistently improving alignment performance across diverse preference optimization methods, demonstrating both generality and effectiveness.

📝 Abstract
Heterogeneous model fusion enhances the performance of LLMs by integrating the knowledge and capabilities of multiple structurally diverse models. However, existing approaches often rely solely on selecting the best output for each prompt from source models, which underutilizes their full potential due to limited source knowledge and results in sparse optimization signals. To address this limitation, we propose FuseRL, a novel two-stage framework comprising FuseSFT and FusePO to maximize the utilization of source LLMs. FuseSFT establishes a robust initialization by integrating the strengths of heterogeneous source models through weighted supervised fine-tuning (SFT) on diverse outputs for each prompt. FusePO optimizes weighted preferences based on the outputs of multiple source models to enable superior alignment performance. Extensive experiments demonstrate the effectiveness of our framework across various preference alignment methods, including RLOO, DPO, and SimPO. Using Llama-3.1-8B-Instruct as the target model, our approach achieves state-of-the-art performance among 8B LLMs on the AlpacaEval-2 and Arena-Hard benchmarks. Further analysis suggests that FuseSFT regularizes the training process to reduce overfitting, while FusePO introduces dense and diverse signals for preference optimization.
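The FuseSFT stage described above, weighted SFT over the diverse outputs of several source models for each prompt, can be sketched as a weighted negative log-likelihood. This is a minimal illustration, not the paper's exact formulation: the per-response quality weights and their normalization are assumptions here.

```python
def fusesft_loss(response_logprobs, weights):
    """Weighted SFT objective (sketch): weighted average negative
    log-likelihood over the source models' responses to one prompt.

    response_logprobs: summed token log-probabilities of each source
        response under the target model.
    weights: assumed per-response quality weights (e.g. reward scores);
        how FuseRL actually derives them is not specified in this summary.
    """
    total_w = sum(weights)
    return -sum(w * lp for lp, w in zip(response_logprobs, weights)) / total_w
```

A higher-weighted response contributes more to the gradient, so the target model is pulled toward the strongest source outputs rather than a single selected "best" one.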
Problem

Research questions and friction points this paper is trying to address.

Optimizing fusion of diverse LLMs for enhanced performance
Addressing sparse signals in heterogeneous model integration
Improving alignment via dense preference optimization techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework comprising FuseSFT and FusePO
Weighted supervised fine-tuning for initialization
Dense weighted preference optimization for alignment
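The FusePO idea of replacing a single best-vs-rest pair with dense signals from all source outputs can be illustrated with a DPO-style loss over every response pair. This is a hedged sketch under assumptions: the pairwise construction, the reward-gap weighting, and the `beta` value are illustrative choices, not the paper's stated algorithm.

```python
import math
from itertools import combinations

def fusepo_dpo_loss(logratios, rewards, beta=0.1):
    """Dense weighted DPO-style objective (sketch).

    Instead of one chosen/rejected pair per prompt, form all pairs among
    the source-model responses, order each pair by an assumed reward
    score, and weight its logistic loss by the reward gap.

    logratios: per-response policy-vs-reference log-probability ratios.
    rewards: assumed scalar quality scores for the same responses.
    """
    total, total_w = 0.0, 0.0
    for i, j in combinations(range(len(rewards)), 2):
        if rewards[i] == rewards[j]:
            continue  # tied pairs carry no preference signal
        win, lose = (i, j) if rewards[i] > rewards[j] else (j, i)
        margin = logratios[win] - logratios[lose]
        weight = abs(rewards[i] - rewards[j])  # assumed pair weight
        # -weight * log(sigmoid(beta * margin)), written stably
        total += weight * math.log1p(math.exp(-beta * margin))
        total_w += weight
    return total / total_w
```

With n source responses this yields up to n(n-1)/2 weighted pairs per prompt, which is the sense in which the optimization signal becomes dense rather than sparse.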