🤖 AI Summary
This work addresses the notable gap in multi-step reasoning performance between large vision-language models (LVLMs) and text-only large language models (LLMs). The study reveals, for the first time, that the two model families share a substantial set of neurons at the neuronal level, forming a modality-invariant reasoning subspace. Building on this insight, the authors propose a parameter-efficient method for cross-modal reasoning transfer: a low-rank fusion mechanism selectively injects weight updates, restricted to the shared-neuron subspace, from LLMs into LVLMs, circumventing the need for extensive multimodal fine-tuning. Experiments demonstrate that this approach significantly enhances LVLMs' reasoning capabilities across multiple mathematics and perception benchmarks while preserving their original perceptual skills, thereby validating the role of shared neurons as an interpretable bridge across modalities.
📝 Abstract
Large vision-language models (LVLMs) have rapidly advanced across various domains, yet they still lag behind strong text-only large language models (LLMs) on tasks that require multi-step inference and compositional decision-making. Motivated by their shared transformer architectures, we investigate whether the two model families rely on common internal computation for such inference. At the neuron level, we uncover a surprisingly large overlap: more than half of the top-activated units during multi-step inference are shared between representative LLMs and LVLMs, revealing a modality-invariant inference subspace.
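The overlap measurement described above can be sketched as follows. This is an illustrative toy, not the authors' profiling code: the activation profiles, the layer they come from, and the choice of `k` are all hypothetical, and a real measurement would average activations over many multi-step reasoning prompts.

```python
import numpy as np

def top_neuron_overlap(acts_a, acts_b, k):
    """Fraction of the top-k most-activated neurons shared by two models.

    acts_a / acts_b: one mean-absolute-activation value per neuron
    (equal-length 1-D arrays), e.g. averaged over reasoning prompts.
    Returns a value in [0, 1]; 1.0 means identical top-k sets.
    """
    top_a = set(np.argsort(-acts_a)[:k])  # indices of k largest activations
    top_b = set(np.argsort(-acts_b)[:k])
    return len(top_a & top_b) / k

# Synthetic demo: two profiles built from a common component plus noise,
# mimicking an LLM and an LVLM that share much of their circuitry.
rng = np.random.default_rng(0)
base = rng.standard_normal(4096)
acts_llm = np.abs(base + 0.3 * rng.standard_normal(4096))
acts_lvlm = np.abs(base + 0.3 * rng.standard_normal(4096))
print(top_neuron_overlap(acts_llm, acts_lvlm, k=256))
```

With strongly correlated profiles the overlap lands well above the random baseline of `k / n_neurons` (here 256/4096 ≈ 0.06), which is the kind of signal the paper reports at neuron level.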
Through causal probing via activation amplification, we further show that these shared neurons encode consistent and interpretable concept-level effects, demonstrating their functional contribution to inference. Building on this insight, we propose Shared Neuron Low-Rank Fusion (SNRF), a parameter-efficient framework that transfers mature inference circuitry from LLMs to LVLMs. SNRF profiles cross-model activations to identify shared neurons, computes a low-rank approximation of inter-model weight differences, and injects these updates selectively within the shared-neuron subspace. This mechanism strengthens multimodal inference performance with minimal parameter changes and requires no large-scale multimodal fine-tuning.
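The three SNRF steps described above (profile shared neurons, low-rank-approximate the inter-model weight difference, inject within the shared-neuron subspace) can be sketched for a single layer. This is a minimal illustration of the stated mechanism under assumed conventions (shared neurons as row indices, truncated SVD for the low-rank step), not the authors' implementation.

```python
import numpy as np

def snrf_layer_update(W_llm, W_lvlm, shared_idx, rank):
    """Sketch of a single-layer SNRF-style update (illustrative only).

    1. Form the weight difference between matched LLM and LVLM layers.
    2. Zero it outside the shared-neuron rows (the shared subspace).
    3. Truncated-SVD to rank r, then add the update to the LVLM weights.
    """
    delta = np.zeros_like(W_lvlm)
    delta[shared_idx] = (W_llm - W_lvlm)[shared_idx]  # restrict to shared neurons
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    delta_lr = U[:, :rank] * S[:rank] @ Vt[:rank]     # rank-r approximation
    return W_lvlm + delta_lr

# Toy weights; in practice these would be matched transformer layers.
rng = np.random.default_rng(1)
W_llm = rng.standard_normal((64, 64))
W_lvlm = rng.standard_normal((64, 64))
shared = np.arange(32)  # hypothetical shared-neuron indices from profiling
W_new = snrf_layer_update(W_llm, W_lvlm, shared, rank=8)
```

Because the difference matrix is zero outside the shared rows, the truncated SVD leaves non-shared neurons untouched, which is one way the "minimal parameter changes" property can be realized.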
Across diverse mathematics and perception benchmarks, SNRF consistently enhances LVLM inference performance while preserving perceptual capabilities. Our results demonstrate that shared neurons form an interpretable bridge between LLMs and LVLMs, enabling low-cost transfer of inference ability into multimodal models. Our code is available at [https://github.com/chenhangcuisg-code/Do-LLMs-VLMs-Share-Neurons](https://github.com/chenhangcuisg-code/Do-LLMs-VLMs-Share-Neurons).