Modality Gap-Driven Subspace Alignment Training Paradigm for Multimodal Large Language Models

📅 2026-02-02
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the systematic geometric misalignment, commonly called the modality gap, between visual and linguistic representations in multimodal large language models, which hinders effective alignment and scaling. The authors propose a theoretical framework that decomposes the modality gap within a fixed reference frame, relaxing the isotropic assumption built into conventional contrastive learning. This enables, for the first time, statistical alignment from large-scale unpaired data alone, with no aligned image–text pairs. Building on this theory, they introduce ReAlign, a training-free three-step alignment strategy (Anchor, Trace, Centroid), and ReVision, a scalable pretraining paradigm. Experiments show that the approach significantly improves representation alignment without high-quality paired data, offering a path toward efficient scaling of multimodal models.
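Read literally, the fixed-frame decomposition separates a constant mean offset between the two modalities from direction-dependent spread. Below is a minimal numpy sketch of that reading on toy data; the variable names, sizes, and distributions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy stand-ins for unpaired image and text embedding clouds.
rng = np.random.default_rng(0)
d = 64
img = rng.normal(loc=0.5, scale=1.0, size=(1000, d))
txt = rng.normal(loc=-0.5, scale=0.7, size=(1200, d))

# Stable bias: the systematic offset between the two modality centroids.
bias = img.mean(axis=0) - txt.mean(axis=0)

# Anisotropic residual: direction-dependent spread mismatch left over after
# removing the bias, summarized here by the covariance difference (no
# isotropy is assumed: each direction may disagree by a different amount).
residual = np.cov(img, rowvar=False) - np.cov(txt, rowvar=False)

print("bias norm:", np.linalg.norm(bias))
print("residual spectral norm:", np.linalg.norm(residual, 2))
```

Note that both statistics are computable from unpaired clouds, which is what makes the pair-free alignment claim plausible.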

📝 Abstract
Despite the success of multimodal contrastive learning in aligning visual and linguistic representations, a persistent geometric anomaly, the Modality Gap, remains: embeddings of distinct modalities expressing identical semantics occupy systematically offset regions. Prior approaches to bridging this gap are largely limited by oversimplified isotropic assumptions, hindering their application in large-scale scenarios. In this paper, we address these limitations by precisely characterizing the geometric shape of the modality gap and leveraging it for efficient model scaling. First, we propose the Fixed-frame Modality Gap Theory, which decomposes the modality gap within a frozen reference frame into stable biases and anisotropic residuals. Guided by this precise modeling, we introduce ReAlign, a training-free modality alignment strategy. Utilizing statistics from massive unpaired data, ReAlign maps text representations into the image representation distribution via a three-step process comprising Anchor, Trace, and Centroid Alignment, thereby explicitly rectifying geometric misalignment. Building on ReAlign, we propose ReVision, a scalable training paradigm for Multimodal Large Language Models (MLLMs). ReVision integrates ReAlign into the pretraining stage, enabling the model to learn the distribution of visual representations from unpaired text before visual instruction tuning, without the need for large-scale, high-quality image-text pairs. Our framework demonstrates that statistically aligned unpaired data can effectively substitute for expensive image-text pairs, offering a robust path for the efficient scaling of MLLMs.
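
The abstract names the three ReAlign steps (Anchor, Trace, Centroid) but not their exact operators. One plausible reading is classic second-moment matching on unpaired statistics: center the text cloud, reshape its anisotropic spread to the image covariance, then translate it onto the image centroid. The Python sketch below implements that interpretation; `realign_like`, its signature, and the toy data are assumptions for illustration, not the paper's method or API.

```python
import numpy as np

def realign_like(txt: np.ndarray, img_mean: np.ndarray,
                 img_cov: np.ndarray) -> np.ndarray:
    """Map text embeddings toward an image embedding distribution using
    only unpaired statistics (a mean and a covariance).

    Generic moment matching; NOT the paper's exact Anchor/Trace/Centroid
    operators, which are not specified in this summary.
    """
    eps = 1e-6 * np.eye(txt.shape[1])  # ridge for numerical stability

    # "Anchor"-like step: remove the text cloud's own centroid.
    centered = txt - txt.mean(axis=0)

    # "Trace"-like step: reshape the anisotropic spread of the text cloud
    # to match the image covariance (whiten with the text covariance,
    # then recolor with the image covariance).
    t_vals, t_vecs = np.linalg.eigh(np.cov(centered, rowvar=False) + eps)
    whiten = t_vecs @ np.diag(t_vals ** -0.5) @ t_vecs.T
    i_vals, i_vecs = np.linalg.eigh(img_cov + eps)
    recolor = i_vecs @ np.diag(i_vals ** 0.5) @ i_vecs.T
    reshaped = centered @ whiten @ recolor

    # "Centroid"-like step: translate onto the image centroid.
    return reshaped + img_mean

# Toy usage on synthetic, unpaired clouds (shapes and scales are made up).
rng = np.random.default_rng(0)
txt = rng.normal(-0.5, 0.7, size=(1200, 32))
img = rng.normal(0.5, 1.0, size=(1000, 32))
aligned = realign_like(txt, img.mean(axis=0), np.cov(img, rowvar=False))
print(np.linalg.norm(aligned.mean(axis=0) - img.mean(axis=0)))  # ≈ 0
```

Because only a mean vector and a covariance matrix of the image cloud are consumed, no image-text pairing is ever needed, which matches the abstract's claim that unpaired statistics suffice.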
Problem

Research questions and friction points this paper is trying to address.

Modality Gap
Multimodal Large Language Models
Geometric Misalignment
Embedding Alignment
Unpaired Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modality Gap
Subspace Alignment
Training-free Alignment
Unpaired Data
Multimodal Large Language Models