🤖 AI Summary
This work investigates the performance-cost trade-off of model interpolation (the simplest model merging technique) for efficient inference. We identify a three-stage evolution pattern ("degradation → transition → synergy") in interpolation performance, and accordingly propose a precise interpolation framework guided by reasoning trajectory modulation. Our method relies on direct weight interpolation alone, augmented by layer- and module-level ablation analysis and decoding strategy adaptation to systematically optimize the interpolation path. Experiments across multiple reasoning tasks demonstrate that our approach significantly outperforms state-of-the-art, more complex merging methods (e.g., SLM, DARE), achieving up to 1.8× faster inference with zero parameter overhead while consistently surpassing both the individual base models and baseline merged models in effectiveness. The implementation is publicly available.
📝 Abstract
Model merging, typically applied to Instruct and Thinking models, has shown remarkable performance for efficient reasoning. In this paper, we systematically revisit the simplest merging method: directly interpolating the weights of two models. In particular, we observe that model interpolation follows a three-stage evolutionary paradigm with distinct behaviors along the reasoning trajectory. These dynamics provide a principled guide for navigating the performance-cost trade-off. Empirical results demonstrate that a strategically interpolated model surprisingly surpasses sophisticated model merging baselines in both efficiency and effectiveness. We further validate our findings with extensive ablation studies on model layers, modules, and decoding strategies. Ultimately, this work demystifies model interpolation and offers a practical framework for crafting models with precisely targeted reasoning capabilities. Code is available at [GitHub](https://github.com/wutaiqiang/MI).
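The core operation the abstract revisits, direct weight interpolation, can be sketched as a per-parameter linear blend of two models with identical architectures. The function name, coefficient, and toy parameter dictionaries below are illustrative assumptions, not the paper's actual implementation (which operates on full model state dicts):

```python
def interpolate_weights(state_a, state_b, alpha):
    """Linearly interpolate two state dicts (parameter name -> list of floats).

    Returns theta = (1 - alpha) * theta_a + alpha * theta_b for every parameter,
    assuming both models share the same parameter names and shapes.
    """
    assert state_a.keys() == state_b.keys(), "models must share parameter names"
    merged = {}
    for name, w_a in state_a.items():
        w_b = state_b[name]
        merged[name] = [(1 - alpha) * a + alpha * b for a, b in zip(w_a, w_b)]
    return merged

# Toy example: flat float lists stand in for real weight tensors.
instruct = {"layer.0.weight": [1.0, 2.0], "layer.0.bias": [0.0, 0.0]}
thinking = {"layer.0.weight": [3.0, 4.0], "layer.0.bias": [1.0, 1.0]}

merged = interpolate_weights(instruct, thinking, alpha=0.5)
print(merged["layer.0.weight"])  # [2.0, 3.0]
```

Sweeping `alpha` from 0 to 1 traces the interpolation path whose three-stage behavior the paper characterizes; in practice the same blend would be applied to framework-native state dicts rather than plain lists.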