Cross-Modal Redundancy and the Geometry of Vision-Language Embeddings

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited understanding of the geometric structure of embedding spaces in current vision-language models, particularly their cross-modal alignment mechanisms. We propose the Aligned Sparse Autoencoder (Aligned SAE), which builds on the Iso-Energy Assumption, treating cross-modal redundancy as an inductive bias, to enforce energy consistency between modalities while preserving reconstruction quality. Our analysis reveals that bimodal atoms carry the entire cross-modal alignment signal, whereas unimodal atoms account for the modality gap; removing the latter closes the gap without degrading performance. Furthermore, restricting vector arithmetic to the bimodal subspace substantially improves semantic editing and cross-modal retrieval.
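The summary's claim that removing unimodal atoms closes the modality gap can be illustrated with a minimal sketch. Here, atoms are split into bimodal and unimodal by a simple activation criterion; the threshold rule and the names `bimodal_mask` and `project_bimodal` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def bimodal_mask(z_img, z_txt, thresh=0.01):
    # An atom counts as bimodal if it is active (mean activation above
    # `thresh`) in both modalities; atoms firing in only one modality
    # are unimodal and, per the paper's finding, constitute the modality gap.
    active_img = z_img.mean(axis=0) > thresh
    active_txt = z_txt.mean(axis=0) > thresh
    return active_img & active_txt

def project_bimodal(z, mask):
    # Zero out unimodal atoms, keeping only the shared (bimodal) subspace.
    # Reconstructing from these masked codes is the gap-removal operation.
    return z * mask
```

In this sketch, vector arithmetic for semantic editing would likewise be carried out on `project_bimodal(z, mask)` rather than on the raw codes.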

📝 Abstract
Vision-language models (VLMs) align images and text with remarkable success, yet the geometry of their shared embedding space remains poorly understood. To probe this geometry, we begin from the Iso-Energy Assumption, which exploits cross-modal redundancy: a concept that is truly shared should exhibit the same average energy across modalities. We operationalize this assumption with an Aligned Sparse Autoencoder (SAE) that encourages energy consistency during training while preserving reconstruction. We find that this inductive bias changes the SAE solution without harming reconstruction, giving us a representation that serves as a tool for geometric analysis. Sanity checks on controlled data with known ground truth confirm that alignment improves when Iso-Energy holds and remains neutral when it does not. Applied to foundational VLMs, our framework reveals a clear structure with practical consequences: (i) sparse bimodal atoms carry the entire cross-modal alignment signal; (ii) unimodal atoms act as modality-specific biases and fully explain the modality gap; (iii) removing unimodal atoms collapses the gap without harming performance; (iv) restricting vector arithmetic to the bimodal subspace yields in-distribution edits and improved retrieval. These findings suggest that the right inductive bias can both preserve model fidelity and render the latent geometry interpretable and actionable.
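The abstract's "encourages energy consistency during training while preserving reconstruction" can be sketched as a training objective: a standard sparse-autoencoder loss plus a penalty matching each atom's mean energy across modalities. The penalty form and the weights `lam_energy` and `lam_sparse` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def aligned_sae_loss(W_enc, W_dec, img_emb, txt_emb,
                     lam_energy=0.1, lam_sparse=1e-3):
    # Encode both modalities with the same dictionary of atoms.
    z_img = relu(img_emb @ W_enc)
    z_txt = relu(txt_emb @ W_enc)
    # Standard SAE terms: reconstruction error plus an L1 sparsity penalty.
    recon = np.mean((z_img @ W_dec - img_emb) ** 2) \
          + np.mean((z_txt @ W_dec - txt_emb) ** 2)
    sparse = np.abs(z_img).mean() + np.abs(z_txt).mean()
    # Iso-Energy term: a truly shared atom should have the same average
    # energy under image and text inputs, so penalize per-atom mismatch.
    e_img = (z_img ** 2).mean(axis=0)
    e_txt = (z_txt ** 2).mean(axis=0)
    energy = np.mean((e_img - e_txt) ** 2)
    return recon + lam_sparse * sparse + lam_energy * energy
```

The energy term only reweights which dictionary the optimizer prefers; with a small `lam_energy` it acts as the inductive bias described above rather than a hard constraint on reconstruction.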
Problem

Research questions and friction points this paper is trying to address.

vision-language models
embedding geometry
cross-modal alignment
modality gap
shared representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-modal redundancy
Iso-Energy Assumption
Aligned Sparse Autoencoder
vision-language embeddings
modality gap