Improved Alignment of Modalities in Large Vision Language Models

📅 2025-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large-scale vision-language models (VLMs) typically require enormous architectures and massive datasets to achieve unified alignment across diverse multimodal tasks (e.g., image captioning, VQA), limiting accessibility and efficiency. Method: This paper proposes a four-stage autoregressive training framework enabling native visual understanding in lightweight language models. It introduces a phased modality alignment strategy—specifically removing attention masks over visual inputs, incorporating AI-synthesized data, and strengthening alignment during pretraining—to significantly improve training efficiency. The framework is built upon a Transformer architecture enhanced with customized attention mechanisms, SDPA acceleration, and FP16 mixed-precision training. Contribution/Results: The method achieves zero-shot cross-domain transfer (e.g., to PathVQA) and surpasses the 13B-parameter VILA on COCO and Flickr30k in CIDEr with a substantially smaller model and dataset—matching GIT-2’s performance while requiring only 12 hours of end-to-end training.

📝 Abstract
Recent advances in vision-language models have achieved remarkable results in making language models understand visual inputs. However, a unified approach to aligning these models across diverse tasks such as image captioning and visual question answering remains a challenge. Existing methods require either very large language models or very large datasets, which makes inefficient use of existing models. This paper addresses this gap and devises a training strategy for auto-regressive vision-language models to unify vision-language tasks such as image captioning and visual question answering. We propose four training stages for aligning the vision model with the language model; in other words, the language model is given the ability to process visual inputs. We also devise different attention masks for training transformer-based language models that improve the quality of visual features. Further, we report several findings: 1) the attention mask should not be applied to visual inputs; 2) the language model converges faster on AI-generated data; 3) more work should be done in the alignment stage during pre-training of the model; 4) the model can easily adapt to downstream tasks such as visual question answering on healthcare datasets like PathVQA. After training for one epoch across all stages, the model outperforms large models such as the 13-billion-parameter VILA on common benchmarks, including CIDEr scores on the COCO and Flickr30k datasets, and comes very close to GIT-2 on the same datasets despite being a much smaller model trained on a much smaller dataset. All training uses available best practices, such as multi-GPU parallel training, lower-precision (16-bit floating-point) training, faster attention (SDPA), and gradient accumulation, and completes within 12 hours.
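The finding that "the attention mask should not be applied to visual inputs" can be read as a prefix-style mask: visual tokens attend to each other bidirectionally, while text tokens stay causal. The paper does not publish its exact mask construction, so the sketch below is one plausible PyTorch reading of that idea; `build_prefix_mask` is an illustrative name, not the authors' code.

```python
import torch

def build_prefix_mask(num_visual: int, num_text: int) -> torch.Tensor:
    """Boolean attention mask (True = may attend) for a sequence laid out as
    [visual tokens | text tokens]: the visual prefix is fully visible to
    every position, while text positions remain causal."""
    total = num_visual + num_text
    # Start from a standard causal (lower-triangular) mask.
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))
    # Lift the causal restriction over the visual prefix columns.
    mask[:, :num_visual] = True
    return mask

# Example: 4 visual tokens followed by 3 text tokens.
mask = build_prefix_mask(num_visual=4, num_text=3)
```

A mask like this can be passed directly to `torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask)`, which matches the SDPA acceleration the abstract mentions.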
Problem

Research questions and friction points this paper is trying to address.

Unified approach for aligning vision-language models across diverse tasks
Efficient training strategy for auto-regressive vision-language models
Improved visual feature quality via attention mask design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Auto-regressive training strategy for vision-language alignment
Custom attention masks for transformer-based language models
Multi-stage training with efficient GPU and precision techniques
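The training-efficiency ingredients the abstract lists (gradient accumulation, FP16 mixed precision, SDPA) compose into a standard PyTorch update loop. A minimal, CPU-runnable sketch of the gradient-accumulation part is below; the model and batch names are illustrative stand-ins, not the paper's code, and the GPU-only FP16 pieces are noted in comments rather than executed.

```python
import torch

# Toy stand-in for the vision-language model; purely illustrative.
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def train_step(batches, accum_steps):
    """One optimizer update spread over `accum_steps` micro-batches.
    A paper-style GPU setup would additionally wrap the forward pass in
    torch.autocast(device_type="cuda", dtype=torch.float16) and scale the
    loss with a GradScaler; both are omitted so the sketch runs on CPU."""
    optimizer.zero_grad(set_to_none=True)
    for inputs, targets in batches:
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        # Divide so accumulated gradients average over micro-batches.
        (loss / accum_steps).backward()
    optimizer.step()

batches = [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(4)]
train_step(batches, accum_steps=4)
```

Accumulating over micro-batches gives the effective batch size of one large batch while keeping per-step memory small, which is consistent with fitting the 12-hour training budget on modest hardware.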
Kartik Jangra
Netaji Subhas University of Technology
Aman Kumar Singh
Netaji Subhas University of Technology
Yashwani Mann
Netaji Subhas University of Technology
Geetanjali Rathee
Assistant Professor, Netaji Subhas University of Technology, Dwarka, New Delhi
Blockchain Technology · Industry 4.0 · Cognitive Radio Network · IoT · Fog Computing