AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding

📅 2025-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current vision-language models suffer from semantic inconsistency in image-text alignment because nonlinear connectors (e.g., MLPs) that map visual features into the text embedding space induce distribution shifts and sensitivity to noise. To address this, the paper proposes a cross-modal alignment method that projects visual features into the large language model's (LLM) text embedding space as a weighted average of the LLM's token embeddings, explicitly leveraging the LLM's linguistic priors to improve semantic consistency without updating the LLM's parameters. This is the first work to adopt a weighted average of LLM text embeddings as the explicit visual projection target, realized via a learnable weighting mechanism that provides both robustness and interpretability. The method achieves state-of-the-art performance on document understanding benchmarks, significantly improving alignment quality while remaining robust to image noise and layout variations.

📝 Abstract
Aligning visual features with language embeddings is a key challenge in vision-language models (VLMs). The performance of such models hinges on having a good connector that maps visual features generated by a vision encoder to a shared embedding space with the LLM while preserving semantic similarity. Existing connectors, such as multilayer perceptrons (MLPs), often produce out-of-distribution or noisy inputs, leading to misalignment between the modalities. In this work, we propose a novel vision-text alignment method, AlignVLM, that maps visual features to a weighted average of LLM text embeddings. Our approach leverages the linguistic priors encoded by the LLM to ensure that visual features are mapped to regions of the space that the LLM can effectively interpret. AlignVLM is particularly effective for document understanding tasks, where scanned document images must be accurately mapped to their textual content. Our extensive experiments show that AlignVLM achieves state-of-the-art performance compared to prior alignment methods. We provide further analysis demonstrating improved vision-text feature alignment and robustness to noise.
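The core idea in the abstract, mapping each visual feature to a weighted average of the LLM's token embeddings, can be sketched in a few lines. This is a minimal illustration, not the paper's exact architecture: the single linear projection, the softmax weighting, and all variable names are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def align_connector(visual_feats, W_proj, vocab_embeddings):
    """Map visual features to convex combinations of LLM token embeddings.

    visual_feats:     (num_patches, d_vision)  features from the vision encoder
    W_proj:           (d_vision, d_text)       learnable projection (hypothetical)
    vocab_embeddings: (vocab_size, d_text)     frozen LLM token-embedding matrix
    """
    # 1. Linearly project visual features into the text-embedding dimension.
    h = visual_feats @ W_proj                  # (num_patches, d_text)
    # 2. Score each visual token against every vocabulary embedding.
    logits = h @ vocab_embeddings.T            # (num_patches, vocab_size)
    # 3. Softmax turns scores into per-token weights over the vocabulary.
    weights = softmax(logits, axis=-1)
    # 4. The output is a weighted average of token embeddings, so every
    #    visual token lies inside the convex hull of the LLM's embedding
    #    space -- in-distribution inputs the LLM can interpret.
    return weights @ vocab_embeddings          # (num_patches, d_text)

rng = np.random.default_rng(0)
out = align_connector(rng.normal(size=(4, 8)),
                      rng.normal(size=(8, 16)),
                      rng.normal(size=(100, 16)))
print(out.shape)  # (4, 16)
```

Because the softmax weights are non-negative and sum to one, the connector cannot produce out-of-distribution points, which is the stated advantage over an unconstrained MLP connector.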
Problem

Research questions and friction points this paper is trying to address.

Visual-Semantic Alignment
Transformer Performance
Image-Text Mismatch
Innovation

Methods, ideas, or system contributions that make the work stand out.

AlignVLM
Visual-Semantic Alignment
Robustness
👥 Authors
Ahmed Masry
Graduate Student, York University
Natural Language Processing
Juan A. Rodriguez
Mila - Quebec AI Institute, ETS, ServiceNow Research, ILLS
Artificial Intelligence · Deep Learning · Computer Vision · Multimodal AI · Scalable Vector Graphics
Tianyu Zhang
ServiceNow, Mila, Université de Montréal
Suyuchen Wang
Université de Montréal / Mila
NLP · LLM · VLM · Deep Learning
Chao Wang
ServiceNow
Aarash Feizi
PhD student in Computer Science, McGill University
Representation Learning · Self-Supervised Learning · Graph Representation Learning
Akshay Kalkunte Suresh
Applied Research Scientist at ServiceNow
Machine Learning · Artificial Intelligence · Natural Language Processing · Computer Vision · Speech
Abhay Puri
Applied Research Scientist, ServiceNow Research
Agent Security · Large Language Models · Computer Vision · MultiModal Foundational Models
Xiangru Jian
University of Waterloo
Multimodality · LLM · GNN · Database
Pierre-André Noël
ServiceNow Research
Machine learning · graphs · stochastic processes
Sathwik Tejaswi Madhusudhan
ServiceNow
Marco Pedersoli
ServiceNow, École de Technologie Supérieure
Bang Liu
Associate Professor at the University of Montreal, Canada CIFAR AI Chair at Mila
Natural Language Processing · Deep Learning · Machine Learning · Data Mining
Nicolas Chapados
ServiceNow Research, Mila, Polytechnique Montréal (adjunct)
Deep Learning · Artificial Intelligence · Statistics · Forecasting
Yoshua Bengio
Professor of computer science, University of Montreal, Mila, IVADO, CIFAR
Machine learning · deep learning · artificial intelligence
Enamul Hoque
York University
Christopher Pal
ServiceNow, Mila, Polytechnique Montréal, CIFAR AI Chair
Issam H. Laradji
Sr Manager Research Scientist at ServiceNow & Adjunct Professor at University of British Columbia
Natural Language Processing · Computer Vision · Optimization
David Vázquez
ServiceNow Research, ELLIS member
Artificial Intelligence · Computer Vision · Multimodal Learning · Machine Learning
Perouz Taslakian
ServiceNow
Spandana Gella
ServiceNow AI Research
Multimodal Foundational Models · GUI Agents · Safety & Security
Sai Rajeswar
Staff Research Scientist, Adjunct Professor, Mila, ServiceNow
machine learning · generative models · reinforcement learning