🤖 AI Summary
This study investigates the cross-modal alignment mechanism between visual and linguistic representations in vision-language models (VLMs), challenging the efficacy of prevailing linear adapter architectures for alignment. Adopting a frozen-LLM-and-ViT paradigm in which only a linear adapter is fine-tuned, we introduce pretrained sparse autoencoders (SAEs) as *invariance probes*—a novel method to quantitatively characterize alignment dynamics. Our analysis reveals that visual representations do not align with the language space at the input layer; instead, alignment emerges progressively across transformer layers: ViT outputs exhibit fundamental misalignment with early LLM layers, while stable cross-modal alignment is achieved only in middle-to-late LLM layers. Crucially, SAE reconstruction error and sparsity evolution jointly form an interpretable, quantitative alignment trajectory. This work provides new empirical evidence for multimodal representation learning and establishes a principled, interpretable analytical framework for probing cross-modal alignment.
📝 Abstract
Effective multimodal reasoning depends on the alignment of visual and linguistic representations, yet the mechanisms by which vision-language models (VLMs) achieve this alignment remain poorly understood. We introduce a methodological framework that deliberately maintains a frozen large language model (LLM) and a frozen vision transformer (ViT), connected solely by training a linear adapter during visual instruction tuning. This design is fundamental to our approach: by keeping the language model frozen, we ensure it retains its original language representations without adaptation to visual data. Consequently, the linear adapter must map visual features directly into the LLM's existing representational space rather than allowing the language model to develop specialized visual understanding through fine-tuning. Our experimental design uniquely enables the use of pre-trained sparse autoencoders (SAEs) of the LLM as analytical probes. These SAEs remain perfectly aligned with the unchanged language model and serve as a snapshot of the learned language feature representations. Through systematic analysis of SAE reconstruction error, sparsity patterns, and SAE feature descriptions, we reveal the layer-wise progression through which visual representations gradually align with language feature representations, converging in middle-to-late layers. This suggests a fundamental misalignment between ViT outputs and early LLM layers, raising important questions about whether current adapter-based architectures optimally facilitate cross-modal representation learning.
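The probing idea described above — feeding hidden states at a given LLM layer through a frozen, pretrained SAE and measuring reconstruction error and sparsity — can be sketched as follows. This is a minimal illustration assuming a standard ReLU SAE; the function name, shapes, and the toy random weights are hypothetical stand-ins, not the authors' actual checkpoints or implementation.

```python
import numpy as np

def sae_probe_metrics(h, W_enc, b_enc, W_dec, b_dec):
    """Probe hidden states h (n_tokens, d_model) with a frozen SAE.

    Returns (mean relative reconstruction error, mean L0 sparsity).
    High error / dense activations suggest the states lie off the
    language feature manifold the SAE was trained on.
    """
    f = np.maximum(h @ W_enc + b_enc, 0.0)   # ReLU feature activations (n, m)
    h_hat = f @ W_dec + b_dec                # SAE reconstruction (n, d)
    rel_err = np.linalg.norm(h - h_hat, axis=1) / np.linalg.norm(h, axis=1)
    l0 = (f > 0).sum(axis=1)                 # active features per token
    return rel_err.mean(), l0.mean()

# Toy usage: random weights stand in for a pretrained SAE, and random
# vectors stand in for (visual or text) hidden states at one LLM layer.
rng = np.random.default_rng(0)
d, m, n = 64, 256, 10                        # d_model, n_features, n_tokens
h = rng.standard_normal((n, d))
W_enc = rng.standard_normal((d, m)) * 0.1
b_enc = -0.5 * np.ones(m)                    # negative bias encourages sparsity
W_dec = rng.standard_normal((m, d)) * 0.1
b_dec = np.zeros(d)
err, l0 = sae_probe_metrics(h, W_enc, b_enc, W_dec, b_dec)
```

In the study's setting, one would run this per layer on both text-token and image-token residual streams; the layer-wise trajectory of `err` and `l0` is the quantitative alignment signal the abstract refers to.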