VLsI: Verbalized Layers-to-Interactions from Large to Small Vision Language Models

📅 2024-12-02
🏛️ Computer Vision and Pattern Recognition
📈 Citations: 2
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) incur prohibitive computational overhead on resource-constrained devices, and naive knowledge distillation from large to small VLMs is often unstable. Method: This paper proposes VLsI, a family of efficient small-scale VLMs (2B and 7B parameters) built around intermediate "verbalizers"—projections that map each layer's features into natural language space, enabling fine-grained, layer-wise alignment of the student's inference process with the teacher's. The approach combines sequential layer-wise knowledge distillation, natural-language-space feature mapping, and multi-stage interactive alignment, moving beyond conventional output-layer imitation without architectural modification. Contribution/Results: Across ten mainstream vision-language benchmarks, VLsI-2B and VLsI-7B outperform GPT-4V by 11.0% and 17.4%, respectively, and clearly surpass same-size baselines. These gains require no model scaling, merging, or architectural changes, demonstrating strong efficiency and generalization under tight resource constraints.

📝 Abstract
The recent surge in high-quality visual instruction tuning samples from closed-source vision-language models (VLMs) such as GPT-4V has accelerated the release of open-source VLMs across various model sizes. However, scaling VLMs to improve performance using larger models brings significant computational challenges, especially for deployment on resource-constrained devices like mobile platforms and robots. To address this, we propose VLsI: Verbalized Layers-to-Interactions, a new VLM family in 2B and 7B model sizes, which prioritizes efficiency without compromising accuracy. VLsI leverages a unique, layer-wise distillation process, introducing intermediate "verbalizers" that map features from each layer to natural language space, allowing smaller VLMs to flexibly align with the reasoning processes of larger VLMs. This approach mitigates the training instability often encountered in output imitation and goes beyond typical final-layer tuning by aligning the small VLMs' layer-wise progression with that of the large ones. We validate VLsI across ten challenging vision-language benchmarks, achieving notable performance gains (11.0% for 2B and 17.4% for 7B) over GPT-4V without the need for model scaling, merging, or architectural changes.
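The core idea—verbalizing each layer's features into natural language space and aligning the student's layer-wise distributions with the teacher's—can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear `verbalize` projection, the per-layer KL objective, and all function names here are hypothetical simplifications of the actual verbalizer and training procedure.

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def verbalize(features, proj):
    # hypothetical verbalizer: linearly project a layer's feature
    # vector into vocabulary logits, then softmax to get a token
    # distribution in "natural language space"
    logits = [sum(f * w for f, w in zip(features, row)) for row in proj]
    return softmax(logits)

def layerwise_distill_loss(student_layers, teacher_layers, s_proj, t_proj):
    # sum, over matched layers, of KL(teacher || student) between the
    # two models' verbalized token distributions -- aligning the
    # student's layer-wise progression with the teacher's, rather
    # than imitating only the final output
    loss = 0.0
    for s_feat, t_feat in zip(student_layers, teacher_layers):
        p = verbalize(t_feat, t_proj)  # teacher token distribution
        q = verbalize(s_feat, s_proj)  # student token distribution
        loss += sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return loss
```

Because each layer is supervised through a shared natural-language space rather than raw hidden states, the student and teacher need not share hidden dimensions or architecture, which is what makes the alignment flexible.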
Problem

Research questions and friction points this paper is trying to address.

Address computational challenges of scaling vision-language models
Enable efficient deployment on resource-constrained devices
Maintain performance without architectural changes or model scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise distillation with intermediate verbalizers for alignment
Flexible reasoning process alignment without architectural changes
Efficient small VLMs matching large model performance gains