ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models

📅 2024-12-09
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
Current practice for generating visual instruction data for multimodal language models faces significant challenges: high computational cost, hallucination risks, poor controllability, and limited scalability. To address these issues, the authors propose ProVision, a programmatic visual instruction data generation system grounded in scene graphs and human-written program rules. ProVision synthesizes instruction data through a scene graph generation pipeline and a suite of 38 instruction generators (24 single-image and 14 multi-image), ensuring interpretability, controllability, and factual accuracy without relying on an LLM in the generation loop. Applied to the Visual Genome and DataComp datasets, the system yields ProVision-10M, a dataset of over 10 million diverse, high-quality instruction samples. Instruction fine-tuning on ProVision-10M yields substantial improvements: up to +7% and +8% accuracy on the CVBench 2D and 3D splits respectively, +8% on Mantis-Eval, and an average +1.6% gain across 11 benchmarks for xGen-MM-4B when the data is used in both pre-training and fine-tuning.

๐Ÿ“ Abstract
With the rise of multimodal applications, instruction data has become critical for training multimodal language models capable of understanding complex image-based queries. Existing practices rely on powerful but costly large language models (LLMs) or multimodal language models (MLMs) to produce instruction data. These are often prone to hallucinations and licensing issues, and the generation process is hard to scale and interpret. In this work, we present a programmatic approach that employs scene graphs as symbolic representations of images and human-written programs to systematically synthesize vision-centric instruction data. Our approach ensures the interpretability and controllability of the data generation process and scales efficiently while maintaining factual accuracy. By implementing a suite of 24 single-image and 14 multi-image instruction generators, along with a scene graph generation pipeline, we build a scalable, cost-effective system, ProVision, which produces diverse question-answer pairs concerning objects, attributes, relations, depth, etc., for any given image. Applied to the Visual Genome and DataComp datasets, we generate over 10 million instruction data points, ProVision-10M, and leverage them in both the pretraining and instruction tuning stages of MLMs. When adopted in the instruction tuning stage, our single-image instruction data yields up to a 7% improvement on the 2D split and 8% on the 3D split of CVBench, along with a 3% increase in performance on QBench2, RealWorldQA, and MMMU. Our multi-image instruction data leads to an 8% improvement on Mantis-Eval. Incorporating our data in both the pre-training and fine-tuning stages of xGen-MM-4B leads to an average improvement of 1.6% across 11 benchmarks.
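The core idea of the abstract — rule-based programs that walk a scene graph and emit question-answer pairs — can be sketched in a few lines. This is a minimal illustration assuming a simple dictionary-based scene-graph format; the graph schema and generator names here are hypothetical, not ProVision's actual data structures or API.

```python
# Hypothetical scene graph for one image: objects with attributes,
# plus relations between object ids (format is illustrative only).
scene_graph = {
    "objects": [
        {"id": 0, "name": "dog", "attributes": ["brown"]},
        {"id": 1, "name": "frisbee", "attributes": ["red"]},
    ],
    "relations": [
        {"subject": 0, "predicate": "catching", "object": 1},
    ],
}

def attribute_qa(graph):
    """Yield (question, answer) pairs about object attributes."""
    for obj in graph["objects"]:
        for attr in obj["attributes"]:
            yield (f"What color is the {obj['name']}?", attr)

def relation_qa(graph):
    """Yield (question, answer) pairs about relations between objects."""
    names = {o["id"]: o["name"] for o in graph["objects"]}
    for rel in graph["relations"]:
        question = (f"What is the {names[rel['subject']]} doing "
                    f"with the {names[rel['object']]}?")
        yield (question, rel["predicate"])

# Each generator is a deterministic program, so every answer is
# traceable to a fact in the graph -- no LLM, hence no hallucination.
qa_pairs = list(attribute_qa(scene_graph)) + list(relation_qa(scene_graph))
```

Scaling this pattern to 38 generators over millions of annotated images is what produces a ProVision-10M-sized dataset: the cost per sample is a few dictionary lookups rather than an LLM call.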
Problem

Research questions and friction points this paper is trying to address.

Multimodal Language Models
Visual Training Data
Efficient Generation Methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

ProVision
Multimodal Language Model
Data Generation Efficiency