StdGEN: Semantic-Decomposed 3D Character Generation from Single Images

📅 2024-11-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for single-image 3D character reconstruction suffer from limited semantic disentanglement, suboptimal geometric and texture fidelity, and slow optimization. Method: The paper proposes the Semantic-aware Large Reconstruction Model (S-LRM) coupled with a differentiable multi-layer semantic surface extraction mechanism. Built on a Transformer architecture, S-LRM jointly reconstructs geometry, texture, and part-level semantics (body, clothing, hair) in a feed-forward manner, followed by iterative implicit-field refinement to improve surface accuracy. The full pipeline completes in about three minutes and supports zero-shot inference and independent editing of each semantic part. Contribution/Results: On 3D anime character generation, StdGEN achieves state-of-the-art results across multiple metrics, including Chamfer Distance (CD), LPIPS, and semantic segmentation IoU, making it the first end-to-end framework to combine high-fidelity reconstruction, strong semantic disentanglement, and minute-scale generation from a single input image.

📝 Abstract
We present StdGEN, an innovative pipeline for generating semantically decomposed high-quality 3D characters from single images, enabling broad applications in virtual reality, gaming, and filmmaking, etc. Unlike previous methods which struggle with limited decomposability, unsatisfactory quality, and long optimization times, StdGEN features decomposability, effectiveness and efficiency; i.e., it generates intricately detailed 3D characters with separated semantic components such as the body, clothes, and hair, in three minutes. At the core of StdGEN is our proposed Semantic-aware Large Reconstruction Model (S-LRM), a transformer-based generalizable model that jointly reconstructs geometry, color and semantics from multi-view images in a feed-forward manner. A differentiable multi-layer semantic surface extraction scheme is introduced to acquire meshes from hybrid implicit fields reconstructed by our S-LRM. Additionally, a specialized efficient multi-view diffusion model and an iterative multi-layer surface refinement module are integrated into the pipeline to facilitate high-quality, decomposable 3D character generation. Extensive experiments demonstrate our state-of-the-art performance in 3D anime character generation, surpassing existing baselines by a significant margin in geometry, texture and decomposability. StdGEN offers ready-to-use semantic-decomposed 3D characters and enables flexible customization for a wide range of applications. Project page: https://stdgen.github.io
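The four-stage flow the abstract describes (single image → multi-view diffusion → feed-forward S-LRM reconstruction → multi-layer semantic extraction → iterative refinement) can be sketched as plain Python stubs. Everything below is an illustrative assumption, not the authors' implementation: the function names, the toy grid resolution, and the random stand-in implicit field are all hypothetical placeholders.

```python
import random

SEMANTIC_LABELS = ("body", "clothes", "hair")
N_VIEWS = 6   # assumed number of synthesized views
GRID = 8      # assumed implicit-field resolution (toy value)

def multi_view_diffusion(image):
    """Stage 1: lift the single input image to N_VIEWS view images.
    Placeholder: just tiles the input instead of running a diffusion model."""
    return [image for _ in range(N_VIEWS)]

def s_lrm(views):
    """Stage 2: feed-forward prediction of a hybrid implicit field.
    Each voxel gets (sdf, rgb, semantic_label); random stand-in values,
    and the views argument is unused in this toy stub."""
    rng = random.Random(0)
    field = {}
    for x in range(GRID):
        for y in range(GRID):
            for z in range(GRID):
                field[(x, y, z)] = (
                    rng.uniform(-1, 1),                          # signed distance
                    (rng.random(), rng.random(), rng.random()),  # color
                    rng.choice(SEMANTIC_LABELS),                 # semantics
                )
    return field

def extract_semantic_layers(field):
    """Stage 3: split the field into one sub-field per semantic part
    (stand-in for the paper's differentiable multi-layer extraction)."""
    layers = {name: {} for name in SEMANTIC_LABELS}
    for voxel, (sdf, rgb, label) in field.items():
        layers[label][voxel] = (sdf, rgb)
    return layers

def refine(layers, n_iters=2):
    """Stage 4: iterative multi-layer surface refinement (no-op placeholder)."""
    return layers

views = multi_view_diffusion(image="input.png")
layers = refine(extract_semantic_layers(s_lrm(views)))
print(sorted(layers))  # ['body', 'clothes', 'hair']
```

The dict-per-layer output mirrors the paper's key property: each semantic part (body, clothes, hair) is a separately addressable surface, so downstream editing can touch one layer without disturbing the others.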
Problem

Research questions and friction points this paper is trying to address.

Generating high-quality 3D characters from a single image
Decomposing characters into separable semantic components
Cutting per-character generation time to about three minutes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic-aware Large Reconstruction Model (S-LRM)
Differentiable multi-layer semantic surface extraction
Efficient multi-view diffusion model integration