Zero-Shot Vision Encoder Grafting via LLM Surrogates

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the prohibitively high computational cost of training vision-language models (VLMs) with large language model (LLM) decoders, this paper proposes a "zero-shot grafting" paradigm. It builds a lightweight surrogate model from the shallow layers of the target LLM and pretrains the vision encoder against this proxy; because the surrogate shares the target's embedding space and representation language, the pretrained encoder transfers seamlessly, zero-shot, to the full-scale LLM without any fine-tuning. The authors present this as the first approach to support plug-and-play vision encoder migration across LLM scales. Across multiple benchmarks, the grafted encoder performs on par with end-to-end VLM training using the full LLM decoder, while reducing overall VLM training cost by roughly 45% with Llama-70B as the decoder, substantially improving training efficiency and scalability for large-scale VLM development.
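The surrogate construction can be sketched schematically. This is a toy illustration, not the paper's code: the decoder is modeled as a plain list of layers, and the depth of 80 (Llama-70B's decoder layer count) and the cutoff `k=8` are illustrative assumptions.

```python
# Illustrative sketch of "surrogate" construction and zero-shot grafting.
# A decoder is modeled as an ordered list of layer names; the surrogate
# inherits the target LLM's shallow layers verbatim, so a vision encoder
# trained against it already "speaks" the target's embedding space.

def build_surrogate(target_layers, k):
    """Inherit the first k (shallow) layers of the target decoder."""
    if not 0 < k <= len(target_layers):
        raise ValueError("k must be within the target's depth")
    return target_layers[:k]

def graft(vision_encoder, decoder_layers):
    """Pair a trained vision encoder with a decoder, with no fine-tuning."""
    return {"encoder": vision_encoder, "decoder": decoder_layers}

# Hypothetical 80-layer target decoder (Llama-70B uses 80 decoder layers).
target = [f"layer_{i}" for i in range(80)]

# Small proxy: shares the target's shallow layers, so it is cheap to train
# against while remaining representation-compatible with the full model.
surrogate = build_surrogate(target, k=8)

# Zero-shot grafting: the encoder trained on the surrogate is plugged
# directly into the full-size target decoder.
vlm = graft("encoder_trained_on_surrogate", target)
```

The key design point this mirrors is that the surrogate's layers are inherited directly rather than distilled, which is what lets the grafted encoder work without adaptation.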

📝 Abstract
Vision language models (VLMs) typically pair a modestly sized vision encoder with a large language model (LLM), e.g., Llama-70B, making the decoder the primary computational burden during training. To reduce costs, a potentially promising strategy is to first train the vision encoder using a small language model before transferring it to the large one. We construct small "surrogate models" that share the same embedding space and representation language as the large target LLM by directly inheriting its shallow layers. Vision encoders trained on the surrogate can then be directly transferred to the larger model, a process we call zero-shot grafting -- when plugged directly into the full-size target LLM, the grafted pair surpasses the encoder-surrogate pair and, on some benchmarks, even performs on par with full decoder training with the target LLM. Furthermore, our surrogate training approach reduces overall VLM training costs by ~45% when using Llama-70B as the decoder.
Problem

Research questions and friction points this paper is trying to address.

Reduce VLM training costs via surrogate models
Enable zero-shot vision encoder grafting to LLMs
Maintain performance while cutting computational burden
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses small surrogate LLMs for vision encoder training
Directly inherits target LLM's shallow layers
Reduces VLM training costs by ~45%