MARVEL-40M+: Multi-Level Visual Elaboration for High-Fidelity Text-to-3D Content Creation

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-3D generation is hindered by the scarcity of high-quality paired text–3D data. To address this, we introduce MARVEL-40M+, the first ultra-large-scale, semantically aligned text–3D dataset, comprising over 40 million multi-level textual annotations across 8.9 million diverse 3D assets. We propose a multi-stage automatic annotation pipeline that integrates multi-view vision-language models (VLMs) and large language models (LLMs), enriched with human metadata from the source datasets to suppress hallucination, and adopt a dual-granularity annotation paradigm combining fine-grained descriptions with concise semantic tags. Furthermore, we design MARVEL-FX3D, a lightweight two-stage framework enabling end-to-end textured mesh generation in under 15 seconds. Our dataset achieves superior annotation quality, with win rates of 72.41% (GPT-4 evaluation) and 73.40% (human evaluation) over existing benchmarks in pairwise preference tests. MARVEL-FX3D establishes a favorable trade-off between fidelity and inference speed, advancing practical text-to-3D synthesis.

📝 Abstract
Generating high-fidelity 3D content from text prompts remains a significant challenge in computer vision due to the limited size, diversity, and annotation depth of existing datasets. To address this, we introduce MARVEL-40M+, an extensive dataset with 40 million text annotations for over 8.9 million 3D assets aggregated from seven major 3D datasets. Our contribution is a novel multi-stage annotation pipeline that integrates open-source pretrained multi-view VLMs and LLMs to automatically produce multi-level descriptions, ranging from detailed (150-200 words) to concise semantic tags (10-20 words). This structure supports both fine-grained 3D reconstruction and rapid prototyping. Furthermore, we incorporate human metadata from the source datasets into our annotation pipeline to add domain-specific information to our annotations and reduce VLM hallucinations. Additionally, we develop MARVEL-FX3D, a two-stage text-to-3D pipeline. We fine-tune Stable Diffusion with our annotations and use a pretrained image-to-3D network to generate 3D textured meshes within 15s. Extensive evaluations show that MARVEL-40M+ significantly outperforms existing datasets in annotation quality and linguistic diversity, achieving win rates of 72.41% by GPT-4 and 73.40% by human evaluators.
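The multi-level annotation structure described in the abstract (detailed 150-200 word descriptions paired with concise 10-20 word semantic tags per asset) could be represented roughly as follows. This is a minimal illustrative sketch; the field names and record layout are assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class MultiLevelAnnotation:
    """Hypothetical record mirroring MARVEL-40M+'s dual-granularity scheme:
    one fine-grained description plus a set of concise semantic tags.
    Field names are illustrative, not taken from the released dataset."""
    asset_id: str
    detailed_description: str            # fine-grained, ~150-200 words
    semantic_tags: list[str] = field(default_factory=list)  # ~10-20 words total

    def tag_word_count(self) -> int:
        # Total words across all tags (the concise annotation level)
        return sum(len(tag.split()) for tag in self.semantic_tags)

# Example usage with placeholder content
ann = MultiLevelAnnotation(
    asset_id="asset_0001",
    detailed_description="A weathered wooden chair with curved armrests ...",
    semantic_tags=["wooden chair", "curved armrests", "weathered"],
)
print(ann.tag_word_count())  # → 5
```

Keeping both granularities in one record lets a downstream pipeline choose the detailed text for fine-grained reconstruction and the tags for rapid prototyping, as the abstract describes.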
Problem

Research questions and friction points this paper is trying to address.

Existing text–3D datasets are too small, too homogeneous, and too shallowly annotated for high-fidelity text-to-3D generation.
Single-level captions cannot serve both fine-grained 3D reconstruction and rapid prototyping.
VLM-generated annotations are prone to hallucination, and current text-to-3D pipelines are slow at inference time.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-stage annotation pipeline with VLMs and LLMs
Two-stage text-to-3D pipeline with Stable Diffusion
Human metadata integration to reduce VLM hallucinations
Sankalp Sinha
DFKI, RPTU Kaiserslautern-Landau, MindGarage
Mohammad Sadil Khan
DFKI, RPTU Kaiserslautern-Landau, MindGarage
Muhammad Usama
DFKI, RPTU Kaiserslautern-Landau, MindGarage
Shino Sam
DFKI, RPTU Kaiserslautern-Landau, MindGarage
Didier Stricker
Professor for Computer Science, University Kaiserslautern (augmented reality, computer vision, image processing, body sensor networks, HCI)
Sk Aziz Ali
BITS Pilani, Hyderabad
Muhammad Zeshan Afzal
Team Lead Multimodal Learning and Perception, DFKI GmbH Germany