GenRecal: Generation after Recalibration from Large to Small Vision-Language Models

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Knowledge distillation for vision-language models (VLMs) faces significant challenges in cross-architecture transfer due to the architectural heterogeneity of base LLMs and discrepancies in tokenization schemes. Method: This paper proposes GenRecal, a general-purpose distillation framework centered on a learnable Recalibrator module that dynamically aligns heterogeneous VLM feature spaces and adapts their representations, jointly optimized via vocabulary-space mapping and generative objectives. Contribution/Results: GenRecal is architecture- and tokenizer-agnostic and markedly increases how much knowledge compact student models can absorb. On multiple vision-language understanding and generation benchmarks, including VQAv2, OK-VQA, TextVQA, and RefCOCO+, the distilled small models consistently outperform proprietary large models such as GPT-4V. These results demonstrate GenRecal's effectiveness in enabling efficient deployment of high-performance VLMs on resource-constrained devices.
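The summary names two mechanisms: aligning the teacher's feature space to the student's, and mapping the teacher's output distribution into the student's vocabulary space. The sketch below is not the paper's implementation; it is a minimal NumPy illustration of those two ideas, with all dimensions, parameter names, and the linear-projection form chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): teacher and student VLMs
# built on different LLMs have different hidden sizes and vocabularies.
D_TEACHER, D_STUDENT = 64, 32
V_TEACHER, V_STUDENT = 100, 80

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def recalibrate(teacher_feats, w_align):
    """Project teacher features into the student's feature space."""
    return teacher_feats @ w_align  # (seq, D_TEACHER) -> (seq, D_STUDENT)

# Learnable parameters of this sketch: a feature-space aligner and a
# vocabulary-space map from teacher logits onto the student vocabulary.
# In a real framework both would be trained jointly with the
# distillation / generative objectives; here they are just random.
w_align = rng.normal(scale=0.1, size=(D_TEACHER, D_STUDENT))
w_vocab = rng.normal(scale=0.1, size=(V_TEACHER, V_STUDENT))

teacher_feats = rng.normal(size=(5, D_TEACHER))    # 5 teacher tokens
teacher_logits = rng.normal(size=(5, V_TEACHER))

student_feats = recalibrate(teacher_feats, w_align)
# Teacher distribution re-expressed over the student's vocabulary,
# usable as a soft target for a KL-style distillation loss.
target_probs = softmax(teacher_logits @ w_vocab)

assert student_feats.shape == (5, D_STUDENT)
assert np.allclose(target_probs.sum(axis=-1), 1.0)
```

The point of the sketch is that once both quantities live in the student's spaces, standard distillation losses apply regardless of which LLM backbone or tokenizer either VLM uses.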

📝 Abstract
Recent advancements in vision-language models (VLMs) have leveraged large language models (LLMs) to achieve performance on par with closed-source systems like GPT-4V. However, deploying these models in real-world scenarios, particularly on resource-constrained devices, remains challenging due to their substantial computational demands. This has spurred interest in distilling knowledge from large VLMs into smaller, more efficient counterparts. A key challenge here arises from the diversity of VLM architectures, which are built on different LLMs and employ varying token types, differing in vocabulary size, token splits, and token index ordering. To address this limitation of being tied to a specific VLM type, we present Generation after Recalibration (GenRecal), a novel, general-purpose distillation framework for VLMs. GenRecal incorporates a Recalibrator that aligns and adapts feature representations between heterogeneous VLMs, enabling effective knowledge transfer across different types of VLMs. Through extensive experiments on multiple challenging benchmarks, we demonstrate that GenRecal significantly improves baseline performance, ultimately outperforming large-scale open- and closed-source VLMs.
Problem

Research questions and friction points this paper is trying to address.

Distilling knowledge from large to small vision-language models
Aligning feature representations across diverse VLM architectures
Enabling efficient deployment on resource-constrained devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills knowledge from large to small VLMs
Recalibrator aligns heterogeneous VLM features
General-purpose framework for diverse VLM architectures