Revisiting Model Interpolation for Efficient Reasoning

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the performance-cost trade-off of model interpolation—the simplest model merging technique—for efficient inference. We identify a three-phase dynamic evolution pattern (“degradation → transition → synergy”) in interpolation performance, and accordingly propose a precise interpolation framework guided by inference trajectory modulation. Our method employs direct weight interpolation only, augmented by layer- or module-level ablation analysis and decoding strategy adaptation to systematically optimize the interpolation path. Experiments across multiple reasoning tasks demonstrate that our approach significantly outperforms state-of-the-art complex merging methods (e.g., SLM, DARE), achieving up to 1.8× faster inference speed with zero parameter overhead, while consistently surpassing both individual base models and baseline merged models in effectiveness. The implementation is publicly available.
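The direct weight interpolation the summary describes can be sketched as a per-parameter linear blend of two models with matching architectures. This is a minimal illustration, not the authors' implementation; the function name, the coefficient `alpha`, and the toy float "tensors" are all illustrative assumptions.

```python
def interpolate_weights(state_a, state_b, alpha):
    """Linearly interpolate two matching state dicts:
    theta_merged = (1 - alpha) * theta_a + alpha * theta_b."""
    assert state_a.keys() == state_b.keys(), "models must share an architecture"
    return {
        name: (1.0 - alpha) * state_a[name] + alpha * state_b[name]
        for name in state_a
    }

# Toy example: plain floats stand in for weight tensors of an
# Instruct model and a Thinking model (hypothetical values).
instruct = {"layer.w": 1.0, "layer.b": 0.0}
thinking = {"layer.w": 3.0, "layer.b": 2.0}

merged = interpolate_weights(instruct, thinking, alpha=0.5)
# merged["layer.w"] == 2.0, merged["layer.b"] == 1.0
```

In practice the same blend would be applied tensor-wise over full model state dicts; sweeping `alpha` from 0 to 1 is what traces out the "degradation → transition → synergy" phases the summary reports.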

📝 Abstract
Model merging, typically on Instruct and Thinking models, has shown remarkable performance for efficient reasoning. In this paper, we systematically revisit the simplest merging method that interpolates two weights directly. Particularly, we observe that model interpolation follows a three-stage evolutionary paradigm with distinct behaviors on the reasoning trajectory. These dynamics provide a principled guide for navigating the performance-cost trade-off. Empirical results demonstrate that a strategically interpolated model surprisingly surpasses sophisticated model merging baselines on both efficiency and effectiveness. We further validate our findings with extensive ablation studies on model layers, modules, and decoding strategies. Ultimately, this work demystifies model interpolation and offers a practical framework for crafting models with precisely targeted reasoning capabilities. Code is available at https://github.com/wutaiqiang/MI.
Problem

Research questions and friction points this paper is trying to address.

Revisiting model interpolation for efficient reasoning capabilities
Analyzing three-stage evolutionary paradigm of reasoning trajectories
Optimizing performance-cost trade-off through strategic weight interpolation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model interpolation merges two weights directly
Three-stage paradigm guides performance-cost trade-off
Strategic interpolation surpasses sophisticated merging baselines
Taiqiang Wu
University of Hong Kong | Tsinghua University
Model Compression · Efficient Methods
Runming Yang
Tsinghua University
LLM · Distillation
Tao Liu
Tsinghua University
Jiahao Wang
The University of Hong Kong
Ngai Wong
The University of Hong Kong