SynthVLM: High-Efficiency and High-Quality Synthetic Data for Vision Language Models

📅 2024-07-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) rely heavily on large-scale real image-text pairs, running into bottlenecks such as inefficient data acquisition, inconsistent quality, and privacy risks. To address this, we propose SynthVLM, the first framework to adopt a "text-to-image reverse synthesis" paradigm: leveraging advanced diffusion models (e.g., SDXL), it generates high-fidelity, semantically aligned synthetic image-text pairs. We construct SynthVLM-100K, the first 100K-scale benchmark rigorously validated by both human annotators and automated models, and design a hybrid synthetic-data distillation pipeline that integrates multi-stage automated filtering with human verification, alongside an end-to-end multimodal large language model (MLLM) pretraining framework. Experiments show that SynthVLM-100K outperforms comparable real-world datasets on VQA; the derived models, SynthVLM-7B/13B, surpass LLaVA using only 18% of its pretraining data and achieve state-of-the-art performance on MMLU, confirming that high-quality synthetic data preserves linguistic understanding and cross-modal generalization.

📝 Abstract
Vision-Language Models (VLMs) have recently emerged, demonstrating remarkable vision-understanding capabilities. However, training these models requires large-scale datasets, which raises challenges around the efficiency, effectiveness, quality, and privacy of web data. In this paper, we introduce SynthVLM, a novel data synthesis and curation method for generating image-caption pairs. Unlike traditional methods, where captions are generated from images, SynthVLM utilizes advanced diffusion models and high-quality captions to automatically synthesize and select high-resolution images from text descriptions, thereby creating precisely aligned image-text pairs. To demonstrate the power of SynthVLM, we introduce SynthVLM-100K, a high-quality dataset consisting of 100,000 curated and synthesized image-caption pairs. In both model and human evaluations, SynthVLM-100K outperforms traditional real-world datasets. Leveraging this dataset, we develop a new family of multimodal large language models (MLLMs), SynthVLM-7B and SynthVLM-13B, which achieve state-of-the-art (SOTA) performance on various visual question-answering (VQA) tasks. Notably, our models outperform LLaVA across most metrics with only 18% of its pretraining data. Furthermore, SynthVLM-7B and SynthVLM-13B attain SOTA performance on the MMLU benchmark, demonstrating that the high-quality SynthVLM-100K dataset preserves language abilities. To facilitate future research, our dataset and the complete data generation and curation methods are open-sourced at https://github.com/starriver030515/SynthVLM.
Problem

Research questions and friction points this paper is trying to address.

Enhance data efficiency for Vision-Language Models
Generate high-quality synthetic image-caption pairs
Improve multimodal large language models performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthesizes precisely aligned image-caption pairs efficiently
Utilizes diffusion models to generate high-resolution images
Trains strong multimodal large language models on curated synthetic data
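The synthesize-then-curate idea above can be sketched as ranking candidate image-caption pairs by an image-text alignment score (e.g., CLIPScore) and keeping only the best-aligned fraction. The function name, tuple layout, and scores below are illustrative assumptions, not the paper's exact implementation:

```python
# Minimal sketch of score-based curation of synthetic image-caption pairs.
# Assumption: each pair is (caption, image_id, alignment_score), where the
# score stands in for a CLIPScore-style image-text similarity (higher = better).

def curate_pairs(pairs, keep_ratio=0.5):
    """Keep the top `keep_ratio` fraction of pairs, ranked by alignment score."""
    ranked = sorted(pairs, key=lambda p: p[2], reverse=True)
    k = max(1, int(len(ranked) * keep_ratio))
    return ranked[:k]

# Toy usage with made-up scores standing in for real CLIPScore values.
candidates = [
    ("a red bus on a city street", "img_0", 0.31),
    ("two dogs playing in snow", "img_1", 0.27),
    ("blurry unrelated scene", "img_2", 0.12),
    ("a plate of sushi on a table", "img_3", 0.29),
]
selected = curate_pairs(candidates, keep_ratio=0.5)  # keeps img_0 and img_3
```

In the full pipeline this filter would sit after diffusion-model generation (captions in, images out) and before MLLM pretraining; the paper additionally applies multi-stage automated filtering and human verification.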
Zheng Liu
Peking University, Shanghai AI Laboratory
Hao Liang
Peking University, Shanghai AI Laboratory
Wentao Xiong
Shanghai AI Laboratory
Qinhan Yu
Peking University
data-centric AI, RAG
Conghui He
Shanghai AI Laboratory
data-centric AI, LLM, document intelligence
Bin Cui
Peking University, Shanghai AI Laboratory
Wentao Zhang
Institute of Physics, Chinese Academy of Sciences
photoemission, superconductivity, cuprate, HTSC, time-resolved