Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models require billions of image-text pairs and millions of GPU hours to train, incurring prohibitive computational and data costs. To address this, we propose a Vision-Language-Vision (VLV) auto-encoding framework: we freeze the decoder of a pre-trained text-to-image diffusion model to form a language-representation bottleneck; then, using only image data (no paired captions), we distill the diffusion model's implicit semantic knowledge, encoded in continuous latent embeddings, into a large language model via continuous embedding distillation. This work is the first to transfer the continuous semantic priors inherent in diffusion models to vision-language generation tasks, eliminating reliance on large-scale aligned image-text corpora. Experiments demonstrate that our method achieves state-of-the-art performance on image captioning, on par with GPT-4o and Gemini 2.0 Flash, while costing under $1,000 in training expenses, substantially reducing both computational overhead and data requirements.

📝 Abstract
Building state-of-the-art Vision-Language Models (VLMs) with strong captioning capabilities typically necessitates training on billions of high-quality image-text pairs, requiring millions of GPU hours. This paper introduces the Vision-Language-Vision (VLV) auto-encoder framework, which strategically leverages key pretrained components: a vision encoder, the decoder of a Text-to-Image (T2I) diffusion model, and subsequently, a Large Language Model (LLM). Specifically, we establish an information bottleneck by regularizing the language representation space, achieved through freezing the pretrained T2I diffusion decoder. Our VLV pipeline effectively distills knowledge from the text-conditioned diffusion model using continuous embeddings, demonstrating comprehensive semantic understanding via high-quality reconstructions. Furthermore, by fine-tuning a pretrained LLM to decode the intermediate language representations into detailed descriptions, we construct a state-of-the-art (SoTA) captioner comparable to leading models like GPT-4o and Gemini 2.0 Flash. Our method demonstrates exceptional cost-efficiency and significantly reduces data requirements; by primarily utilizing single-modal images for training and maximizing the utility of existing pretrained models (image encoder, T2I diffusion model, and LLM), it circumvents the need for massive paired image-text datasets, keeping the total training expenditure under $1,000 USD.
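The two-stage pipeline the abstract describes can be caricatured in a few lines. The sketch below is a deliberately simplified linear toy, not the paper's implementation: plain numpy matrices stand in for the vision encoder and the frozen T2I diffusion decoder, and all dimensions are invented. It illustrates only the Stage-1 idea, that a reconstruction loss back-propagated through a *frozen* decoder is enough to train the encoder to pack image semantics into the continuous bottleneck from image-only data; Stage 2 (fine-tuning an LLM to decode the bottleneck into captions) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (assumptions, not from the paper).
IMG_DIM, BOTTLENECK_DIM, N, STEPS, LR = 64, 128, 32, 300, 0.2

# Frozen "diffusion decoder": bottleneck embeddings -> reconstructed image.
W_dec = rng.normal(size=(BOTTLENECK_DIM, IMG_DIM)) / np.sqrt(BOTTLENECK_DIM)
# Trainable "vision encoder": image -> continuous caption embeddings.
W_enc = rng.normal(size=(IMG_DIM, BOTTLENECK_DIM)) * 0.01

# Image-only training batch: no captions are ever needed in this stage.
x = rng.normal(size=(N, IMG_DIM))

losses = []
for _ in range(STEPS):
    z = x @ W_enc        # continuous language-representation bottleneck
    x_hat = z @ W_dec    # frozen decoder reconstructs the image
    r = x_hat - x
    losses.append(float(np.mean(r ** 2)))
    # Gradient flows through the frozen decoder but updates only the encoder.
    grad_enc = x.T @ (r @ W_dec.T) * (2.0 / r.size)
    W_enc -= LR * grad_enc

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The design point the toy preserves is that freezing the decoder regularizes the bottleneck: the encoder is forced to emit embeddings the pretrained decoder already understands, which is what makes those embeddings decodable into text by an LLM in Stage 2.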
Problem

Research questions and friction points this paper is trying to address.

Reducing the data needed to train Vision-Language Models
Distilling knowledge from diffusion models efficiently
Achieving cost-effective state-of-the-art captioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLV auto-encoder leverages pretrained vision and diffusion models
Knowledge distillation via frozen T2I diffusion decoder
Fine-tunes LLM for captioning with minimal data cost