🤖 AI Summary
In vision-language models (VLMs), visual encoder outputs are projected into the language embedding space via a connector module, often inducing substantial information loss—yet the underlying mechanisms and quantitative impact remain poorly understood. Method: We propose a dual-perspective diagnostic framework: (i) quantifying semantic fidelity via k-nearest-neighbor (k-NN) structural distortion in the joint embedding space, and (ii) localizing image-patch-level information loss through embedding reconstruction inversion. Leveraging mainstream pre-trained VLM architectures, we combine representation space analysis with reconstruction-based inversion techniques. Results: Our analysis reveals that connectors distort 40–60% of local neighborhood relationships; critically, high-loss regions exhibit strong spatial correlation with performance degradation in visual question answering. This work provides the first interpretable, spatially localized, and quantitatively rigorous characterization of connector-induced information loss—establishing a novel diagnostic paradigm and empirical foundation for modality alignment optimization.
📝 Abstract
Vision--language models (VLMs) often process visual inputs through a pretrained vision encoder, followed by a projection into the language model's embedding space via a connector component. While crucial for modality fusion, the potential information loss induced by this projection step and its direct impact on model capabilities remain understudied. We introduce two complementary approaches to examine and quantify this loss by analyzing the latent representation space. First, we evaluate semantic information preservation by analyzing changes in k-nearest neighbor relationships between image representations, before and after projection. Second, we directly measure information loss by reconstructing visual embeddings from the projected representation, localizing loss at an image patch level. Experiments reveal that connectors substantially distort the local geometry of visual representations, with k-nearest neighbors diverging by 40--60% post-projection, correlating with degradation in retrieval performance. The patch-level embedding reconstruction provides interpretable insights for model behavior on visually grounded question-answering tasks, finding that areas of high information loss reliably predict instances where models struggle.