DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image captioning datasets lack fine-grained spatial annotations, such as precise entity localization and spatial relations, and struggle with high object density and sparse captions on high-resolution images. To address this, we introduce the first large-scale, densely grounded image captioning dataset, featuring pixel-level object localization, attribute-level details, and explicit spatial relations across diverse real-world scenes. We propose a three-stage human-in-the-loop annotation pipeline integrating open-world perception, mask-guided detailed object caption generation, and dense caption merging. Additionally, we release two specialized vision-language models: one for region-aware caption generation and another for spatial caption merging. Experiments demonstrate substantial improvements in vision-language understanding, visual grounding, and region-specific captioning. This work establishes both a foundational benchmark dataset and a scalable annotation paradigm for multimodal reasoning in complex, cluttered scenes.

📝 Abstract
Multimodal Large Language Models (MLLMs) demonstrate a complex understanding of scenes, benefiting from large-scale and high-quality datasets. Most existing caption datasets lack grounded locations and relations for visual entities. Existing grounded caption datasets face the problems of missing detailed descriptions and relations, as well as the massive number of object descriptions required for high-resolution images. To fill this gap for the community, we present DenseWorld-1M, the first massive, detailed, dense grounded caption dataset in the real world. We design a three-stage labeling pipeline containing open-world perception, detailed object caption generation, and dense caption merging. The first stage obtains entity-level masks and labels. The second stage generates object-level, detailed captions guided by the masks and labels from the first stage. The final stage merges the object captions and masks into spatial and relational dense captions. To accelerate the labeling process and improve caption quality, we present two VLM models: the Detailed Region Caption model and the Spatial Caption Merging model. Extensive experiments on various settings, including vision-language understanding, visual grounding, and region caption generation, demonstrate the effectiveness of our DenseWorld-1M dataset and labeling models.
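
As a rough illustration of the three-stage pipeline described in the abstract, here is a minimal Python sketch. All function names are hypothetical placeholders; the paper does not specify an implementation API, only the role each stage plays.

```python
from typing import List

def open_world_perception(image) -> List[dict]:
    """Stage 1 (sketch): obtain entity-level masks and labels, e.g. with an
    open-vocabulary detector/segmenter. Returns one record per entity."""
    ...

def detailed_region_caption(image, entity: dict) -> str:
    """Stage 2 (sketch): generate a detailed object-level caption conditioned
    on the entity's mask and label (the role the paper assigns to its
    Detailed Region Caption model)."""
    ...

def spatial_caption_merging(image, entities: List[dict], captions: List[str]) -> str:
    """Stage 3 (sketch): merge object captions and masks into one dense caption
    with spatial relations (the role of the Spatial Caption Merging model)."""
    ...

def label_image(image) -> str:
    entities = open_world_perception(image)                    # masks + labels
    captions = [detailed_region_caption(image, e) for e in entities]
    return spatial_caption_merging(image, entities, captions)  # dense caption
```

The key design point the pipeline encodes is that captioning is conditioned on perception output: stage 2 sees each entity's mask and label, and stage 3 sees both the masks and the per-object captions, so spatial relations can be stated against grounded regions rather than free text alone.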
Problem

Research questions and friction points this paper is trying to address.

Existing caption datasets lack grounded locations and spatial relations for visual entities
Existing grounded caption datasets omit detailed descriptions and struggle with the massive number of objects in high-resolution images
The field lacks a large-scale, dense grounded caption dataset covering real-world scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-stage labeling pipeline: open-world perception, detailed object caption generation, and dense caption merging
Two VLM models, the Detailed Region Caption model and the Spatial Caption Merging model, to accelerate labeling and raise caption quality
Detailed object captions combined with explicit spatial relations (a possible record layout is sketched below)
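
To make the last point concrete, the sketch below shows one plausible shape for a dense grounded annotation record. The schema is an assumption inferred from the components named in the abstract (entity masks, labels, object-level captions, and a merged dense caption), not the paper's released format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EntityAnnotation:
    label: str      # open-world class label (stage 1)
    mask_rle: str   # pixel-level mask, assumed run-length encoded
    caption: str    # detailed object-level caption (stage 2)

@dataclass
class DenseCaptionRecord:
    image_id: str
    entities: List[EntityAnnotation] = field(default_factory=list)
    dense_caption: str = ""  # merged caption with spatial relations (stage 3)

# Hypothetical usage: one record ties every sentence of the dense caption
# back to pixel-level evidence via the per-entity masks.
record = DenseCaptionRecord(
    image_id="000001",
    entities=[EntityAnnotation("dog", "<rle>", "a small brown dog lying on a rug")],
    dense_caption="A small brown dog lies on a rug to the left of a sofa.",
)
```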