🤖 AI Summary
This work proposes a streaming merging paradigm that reframes model merging as an iterative optimization process, addressing a limitation of existing approaches: they are often confined to post-hoc adjustments or task-interference mitigation and struggle to replicate the dynamic benefits of supervised fine-tuning. The proposed Activation-guided Rotation-aware Merging (ARM) aligns semantic representations within activation subspaces and steers parameter updates along the resulting trajectories. Notably, it requires only early-stage fine-tuning checkpoints yet consistently outperforms fully converged supervised fine-tuned models across diverse domains, including mathematics and code, on models ranging from 1.7B to 14B parameters. The approach substantially improves the efficiency and scalability of model merging while preserving high performance.
📝 Abstract
The escalating scale of Large Language Models (LLMs) necessitates efficient adaptation techniques. Model merging has gained prominence for its efficiency and controllability. However, existing merging techniques typically serve as post-hoc refinements or focus on mitigating task interference, and often fail to capture the dynamic optimization benefits of supervised fine-tuning (SFT). In this work, we propose Streaming Merging, a model-updating paradigm that conceptualizes merging as an iterative optimization process. Central to this paradigm is **ARM** (**A**ctivation-guided **R**otation-aware **M**erging), a strategy designed to approximate gradient-descent dynamics. By treating merging coefficients as learning rates and deriving rotation vectors from activation subspaces, ARM steers parameter updates along data-driven trajectories. Unlike conventional linear interpolation, ARM aligns semantic subspaces to preserve the geometric structure of high-dimensional parameter evolution. Remarkably, ARM requires only early SFT checkpoints and, through iterative merging, surpasses the fully converged SFT model. Extensive experiments across model scales (1.7B to 14B) and diverse domains (e.g., math, code) confirm that ARM transcends converged checkpoints and provides a scalable, lightweight framework for efficient model adaptation.
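To make the described update rule concrete, below is a minimal Python sketch of what one ARM-style merging step could look like: the merging coefficient plays the role of a learning rate, and the parameter displacement toward an early SFT checkpoint is aligned with the top directions of an activation subspace before being applied. The paper's exact rotation construction is not reproduced here; the sketch substitutes a simple activation-subspace projection as a stand-in, and all names (`arm_update_step`, `eta`, `rank`, `w_pretrained`, `early_sft_checkpoints`, `calib_activations`) are hypothetical placeholders rather than the authors' API.

```python
import torch

def arm_update_step(w_base, w_ckpt, activations, eta=0.3, rank=32):
    """One hypothetical ARM-style merging step (illustrative sketch only).

    w_base      : current merged weight matrix, shape (d_out, d_in)
    w_ckpt      : weight matrix from an early SFT checkpoint, same shape
    activations : layer inputs collected on task data, shape (n_samples, d_in)
    eta         : merging coefficient, treated like a learning rate
    rank        : dimension of the activation subspace used for alignment
    """
    # Task vector: raw parameter displacement toward the SFT checkpoint.
    delta = w_ckpt - w_base

    # Activation subspace: top right-singular vectors of the activation matrix
    # capture the input directions that the task data actually excites.
    _, _, vh = torch.linalg.svd(activations, full_matrices=False)
    basis = vh[:rank]                      # (rank, d_in)

    # Subspace alignment (stand-in for the rotation step): keep only the
    # component of the update acting on the activation subspace instead of
    # interpolating the full parameter difference blindly.
    proj = basis.T @ basis                 # (d_in, d_in) projector
    delta_aligned = delta @ proj

    # Gradient-descent-like update with the merging coefficient as step size.
    return w_base + eta * delta_aligned


# Streaming merging (hypothetical usage): fold in successive early SFT
# checkpoints one at a time, iterating toward the adapted model.
w = w_pretrained.clone()
for w_ckpt in early_sft_checkpoints:
    w = arm_update_step(w, w_ckpt, calib_activations, eta=0.3)
```

Under these assumptions, each pass acts like one optimizer step along a data-driven direction, which is how iterating over only early checkpoints could plausibly approach or exceed the converged SFT solution described in the abstract.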