RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of vision-language representation learning from unpaired multimodal interleaved documents (e.g., PDFs, web pages), this paper proposes RealSyn, an end-to-end framework. It introduces a real-data extraction pipeline, a hierarchical semantic retrieval module, and an image-guided text generation component. Crucially, it pioneers a hybrid construction paradigm that pairs each image with both realistic and synthetic texts, and incorporates a semantic-balanced sampling strategy to improve modeling of long-tail concepts. The authors release the RealSyn dataset at three scales: 15M, 30M, and 100M samples. Extensive evaluations demonstrate state-of-the-art performance across downstream tasks, including image-text retrieval, visual question answering (VQA), and cross-modal reasoning, validating RealSyn's strong scalability and generalization capability.

📝 Abstract
After pre-training on extensive image-text pairs, Contrastive Language-Image Pre-training (CLIP) demonstrates promising performance on a wide variety of benchmarks. However, a substantial volume of non-paired data, such as multimodal interleaved documents, remains underutilized for vision-language representation learning. To fully leverage these unpaired documents, we initially establish a Real-World Data Extraction pipeline to extract high-quality images and texts. Then we design a hierarchical retrieval method to efficiently associate each image with multiple semantically relevant realistic texts. To further enhance fine-grained visual information, we propose an image semantic augmented generation module for synthetic text production. Furthermore, we employ a semantic balance sampling strategy to improve dataset diversity, enabling better learning of long-tail concepts. Based on these innovations, we construct RealSyn, a dataset combining realistic and synthetic texts, available in three scales: 15M, 30M, and 100M. Extensive experiments demonstrate that RealSyn effectively advances vision-language representation learning and exhibits strong scalability. Models pre-trained on RealSyn achieve state-of-the-art performance on multiple downstream tasks. To facilitate future research, the RealSyn dataset and pre-trained model weights are released at https://github.com/deepglint/RealSyn.
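The abstract's hierarchical retrieval step associates each extracted image with multiple semantically relevant texts. As a minimal sketch of the core operation, the snippet below does flat top-k retrieval by cosine similarity over embedding matrices (the paper's actual method is hierarchical and operates at much larger scale; all names, shapes, and the flat-search simplification here are illustrative assumptions, not the released implementation):

```python
import numpy as np

def retrieve_texts(image_embs: np.ndarray, text_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """For each image embedding, return indices of the k most similar
    text embeddings by cosine similarity.

    This is a simplified stand-in for RealSyn's hierarchical retrieval:
    a real system would first route queries to semantic clusters, then
    search only within the matched cluster.
    """
    # L2-normalize rows so dot products equal cosine similarities.
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = img @ txt.T                        # shape: (n_images, n_texts)
    # argsort descending, keep the k best text indices per image.
    return np.argsort(-sims, axis=1)[:, :k]
```

For example, with 2 image embeddings and 4 text embeddings, `retrieve_texts(img, txt, k=2)` yields a `(2, 2)` index array, where row `i` lists the two texts most relevant to image `i`.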
Problem

Research questions and friction points this paper is trying to address.

Utilize unpaired multimodal interleaved documents for pre-training
Enhance vision-language representation learning
Create a scalable dataset combining realistic and synthetic texts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-World Data Extraction pipeline
Hierarchical retrieval method
Image semantic augmented generation module