Can MLLMs Absorb Math Reasoning Abilities from LLMs as Free Lunch?

📅 2025-10-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Can multimodal large language models (MLLMs) directly inherit the strong mathematical reasoning capabilities of unimodal large language models (LLMs) without fine-tuning? Method: The paper proposes IP-Merging, a tuning-free parameter-fusion framework. It first identifies a significant parameter-space misalignment between MLLMs and LLMs, then uses a layer-importance-driven mechanism to locate the critical reasoning layers, and finally projects the heterogeneous parameters into a shared subspace to preserve alignment before merging. Contribution: The method substantially improves MLLMs' mathematical reasoning performance (e.g., on GSM8K and MATH) without additional training or data, while preserving their original vision-language understanding capabilities. It establishes an interpretable, efficient, and generalizable paradigm for cross-modal capability transfer, offering a principled way to leverage pre-trained LLM reasoning in multimodal settings.

📝 Abstract
Math reasoning is a crucial ability of large language models (LLMs), where significant advancements have been achieved in recent years. However, most efforts focus on LLMs by curating high-quality annotation data and intricate training (or inference) paradigms, while the math reasoning performance of multi-modal LLMs (MLLMs) still lags behind. Since an MLLM typically consists of an LLM and a vision block, we ask: Can MLLMs directly absorb math reasoning abilities from off-the-shelf math LLMs without tuning? Recent model-merging approaches may offer insights into this question. However, they overlook the alignment between the MLLM and the LLM, where we find a large gap between their parameter spaces that results in lower performance. Our empirical evidence reveals two key factors behind this issue: identifying the crucial reasoning-associated layers in the model and mitigating the gaps in parameter space. Based on these insights, we propose IP-Merging, which first identifies the reasoning-associated parameters in both the MLLM and the Math LLM, then projects them into the subspace of the MLLM to maintain alignment, and finally merges the parameters in this subspace. IP-Merging is tuning-free since parameters are adjusted directly. Extensive experiments demonstrate that IP-Merging can enhance the math reasoning ability of MLLMs directly from Math LLMs without compromising their other capabilities.
Problem

Research questions and friction points this paper is trying to address.

MLLMs lag behind LLMs in math reasoning
Existing model merging overlooks parameter-space alignment
Goal: a tuning-free method to transfer math abilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies reasoning-associated layers in models
Projects parameters into MLLM subspace for alignment
Merges parameters without tuning to enhance reasoning
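The three steps above can be sketched in code. This is a minimal illustration, not the paper's implementation: the function names, the layer-importance proxy (the parameter-space gap between corresponding layers), the SVD-based subspace projection, and the interpolation coefficient `alpha` are all assumptions for demonstration, since the abstract does not specify the exact criteria.

```python
import numpy as np

def select_reasoning_layers(mllm_layers, math_layers, k):
    """Pick the k layers with the largest parameter-space gap.

    Assumed importance proxy: Frobenius norm of the weight difference
    between corresponding MLLM and Math-LLM layers.
    """
    gaps = [np.linalg.norm(a - b) for a, b in zip(mllm_layers, math_layers)]
    return sorted(int(i) for i in np.argsort(gaps)[-k:])

def project_and_merge(w_mllm, w_math, rank, alpha=0.5):
    """Project the Math-LLM weights into the MLLM's top-`rank`
    left-singular subspace, then linearly interpolate the two."""
    u, _, _ = np.linalg.svd(w_mllm, full_matrices=False)
    basis = u[:, :rank]                        # MLLM subspace basis
    w_math_proj = basis @ (basis.T @ w_math)   # aligned Math-LLM weights
    return (1 - alpha) * w_mllm + alpha * w_math_proj
```

With `alpha=0` the MLLM weights are returned unchanged, and with a full-rank basis the projection is the identity, so the merge reduces to plain weight averaging; the interesting regime is a reduced rank, where the Math-LLM parameters are first aligned to the MLLM's subspace before merging.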