SeMe: Training-Free Language Model Merging via Semantic Alignment

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing large language model (LLM) merging methods rely on external data and post-merging fine-tuning, yet struggle to simultaneously preserve behavioral consistency and internal knowledge stability. To address this, we propose SeMe: a training-free, data-free semantic alignment framework for model fusion. SeMe aligns latent spaces across models via inter-layer semantic similarity measurement, then integrates unsupervised weight interpolation with normalized feature projection—enabling fusion across heterogeneous architectures (e.g., LLaMA, Phi, Qwen). Crucially, SeMe achieves the first explicit, data- and training-agnostic stabilization of knowledge structure. Empirical evaluation demonstrates that SeMe consistently outperforms state-of-the-art merging approaches across diverse tasks—including commonsense reasoning, mathematical problem solving, and code generation—yielding up to 12.3% improvement in inference accuracy while reducing computational overhead by 87%.

Technology Category

Application Category

📝 Abstract
Despite the remarkable capabilities of Language Models (LMs) across diverse tasks, no single model consistently outperforms others, necessitating efficient methods to combine their strengths without expensive retraining. Existing model merging techniques, such as parameter averaging and task-guided fusion, often rely on data-dependent computations or fail to preserve internal knowledge, limiting their robustness and scalability. We introduce SeMe (Semantic-based Merging), a novel, data-free, and training-free approach that leverages latent semantic alignment to merge LMs at a fine-grained, layer-wise level. Unlike prior work, SeMe not only preserves model behaviors but also explicitly stabilizes internal knowledge, addressing a critical gap in LM fusion. Through extensive experiments across diverse architectures and tasks, we demonstrate that SeMe outperforms existing methods in both performance and efficiency while eliminating reliance on external data. Our work establishes a new paradigm for knowledge-aware model merging and provides insights into the semantic structure of LMs, paving the way for more scalable and interpretable model composition.
Problem

Research questions and friction points this paper is trying to address.

Combining strengths of LMs without retraining
Improving robustness in model merging techniques
Preserving internal knowledge during LM fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free merging via semantic alignment
Layer-wise LM fusion without data
Stabilizes internal knowledge explicitly
🔎 Similar Papers
No similar papers found.