ArtiScene: Language-Driven Artistic 3D Scene Generation Through Image Intermediary

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual bottlenecks of domain expertise requirements and the scarcity of high-quality 3D training data in text-driven 3D scene generation, this paper proposes a training-free text-to-artistic-3D-scene synthesis framework. Methodologically, it leverages generated 2D images as semantic and geometric intermediaries: a text-to-image model first produces a stylistically consistent scene image; the shape and appearance of each object are then extracted to create individual 3D models, which are assembled into the final scene using geometry, position, and pose information derived from the same intermediary image. Its key contribution is establishing a training-free "text → 2D (intermediary) → 3D" paradigm that eliminates reliance on 3D supervision while ensuring layout plausibility, stylistic consistency, and artistic expressiveness. Experiments demonstrate significant improvements over state-of-the-art methods in both layout and aesthetic metrics; user studies yield an average 74.89% win rate, and GPT-4o evaluation an average of 95.07%.

📝 Abstract
Designing 3D scenes is traditionally a challenging task that demands both artistic expertise and proficiency with complex software. Recent advances in text-to-3D generation have greatly simplified this process by letting users create scenes based on simple text descriptions. However, as these methods generally require extra training or in-context learning, their performance is often hindered by the limited availability of high-quality 3D data. In contrast, modern text-to-image models learned from web-scale images can generate scenes with diverse, reliable spatial layouts and consistent, visually appealing styles. Our key insight is that instead of learning directly from 3D scenes, we can leverage generated 2D images as an intermediary to guide 3D synthesis. In light of this, we introduce ArtiScene, a training-free automated pipeline for scene design that integrates the flexibility of free-form text-to-image generation with the diversity and reliability of 2D intermediary layouts. First, we generate 2D images from a scene description, then extract the shape and appearance of objects to create 3D models. These models are assembled into the final scene using geometry, position, and pose information derived from the same intermediary image. Being generalizable to a wide range of scenes and styles, ArtiScene outperforms state-of-the-art benchmarks by a large margin in layout and aesthetic quality by quantitative metrics. It also averages a 74.89% winning rate in extensive user studies and 95.07% in GPT-4o evaluation. Project page: https://artiscene-cvpr.github.io/
Problem

Research questions and friction points this paper is trying to address.

Simplify 3D scene generation using text descriptions
Overcome limited high-quality 3D data with 2D intermediaries
Enhance layout and aesthetic quality in 3D synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages 2D images as 3D synthesis intermediary
Training-free pipeline with text-to-image generation
Extracts shape, appearance, and layout from images
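The assembly step above, placing each extracted object using position cues read off the intermediary image, can be sketched as pinhole-camera back-projection. This is a minimal illustration under assumed camera intrinsics and per-object depth estimates, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Intrinsics:
    """Assumed pinhole camera parameters for the intermediary image."""
    fx: float
    fy: float
    cx: float
    cy: float

def backproject(u: float, v: float, depth: float, K: Intrinsics):
    """Lift a 2D pixel with an estimated depth into 3D camera coordinates."""
    x = (u - K.cx) * depth / K.fx
    y = (v - K.cy) * depth / K.fy
    return (x, y, depth)

def place_objects(detections, K: Intrinsics):
    """Place each detected object at the back-projected center of its 2D box.

    `detections` is a list of (label, (u0, v0, u1, v1) bounding box, depth)
    tuples, standing in for whatever the real pipeline extracts per object.
    """
    scene = []
    for label, (u0, v0, u1, v1), depth in detections:
        cu, cv = (u0 + u1) / 2.0, (v0 + v1) / 2.0
        scene.append((label, backproject(cu, cv, depth, K)))
    return scene

# Example: a 512x512 intermediary image with one centered "chair" at depth 2.0
K = Intrinsics(fx=500.0, fy=500.0, cx=256.0, cy=256.0)
layout = place_objects([("chair", (236, 236, 276, 276), 2.0)], K)
```

A centered detection back-projects onto the camera's optical axis, so the chair lands at (0, 0, 2.0); off-center boxes spread out in x and y proportionally to depth, which is how a single 2D layout can fix the relative positions of many 3D assets.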