TurboPortrait3D: Single-step diffusion-based fast portrait novel-view synthesis

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing single-image portrait 3D reconstruction methods suffer from visual artifacts, loss of geometric detail, and poor identity fidelity; meanwhile, state-of-the-art image diffusion models, though they produce high-quality outputs, lack explicit 3D grounding and incur heavy inference cost. This paper proposes a low-latency novel-view synthesis framework built on a feed-forward image-to-3D pipeline: an initial 3D representation is rendered into noisy views, which a single-step image-space diffusion model, conditioned on the input image, refines in a multi-view-consistent way. A two-stage training strategy (pre-training on synthetic multi-view data followed by fine-tuning on high-quality real images) jointly improves 3D awareness and photorealism. Extensive qualitative and quantitative evaluations demonstrate superior performance over state-of-the-art methods in multi-view consistency, identity preservation, and geometric/texture detail fidelity, while keeping inference latency low.
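The core operation, refining a noisy render with a single denoising step, can be sketched as follows. This is a minimal PyTorch illustration under assumptions: the class name `OneStepRefiner`, the fixed noise level `sigma`, and the backbone's `(noisy, t, cond)` signature are all hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class OneStepRefiner(nn.Module):
    """Hypothetical single-step refiner: an imperfect 3D render is treated as
    a partially noised sample and mapped to a clean image in one forward pass,
    instead of iterating over many denoising steps."""

    def __init__(self, backbone: nn.Module, sigma: float = 0.5):
        super().__init__()
        self.backbone = backbone  # assumed: conditional UNet taking (noisy, t, cond)
        self.sigma = sigma        # assumed fixed noise level the model inverts

    def forward(self, render: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Perturb the render to the noise level the refiner was trained at...
        noisy = render + self.sigma * torch.randn_like(render)
        t = torch.full((render.shape[0],), self.sigma, device=render.device)
        # ...then denoise in a single pass, conditioned on the input photo so
        # identity cues survive refinement.
        return self.backbone(noisy, t, cond)
```

A single forward pass is what keeps latency low relative to standard multi-step diffusion sampling.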

📝 Abstract
We introduce TurboPortrait3D: a method for low-latency novel-view synthesis of human portraits. Our approach builds on the observation that existing image-to-3D models for portrait generation, while capable of producing renderable 3D representations, are prone to visual artifacts, often lack detail, and tend to fail at fully preserving the identity of the subject. On the other hand, image diffusion models excel at generating high-quality images, but besides being computationally expensive, they are not grounded in 3D and thus cannot directly produce multi-view-consistent outputs. In this work, we demonstrate that image-space diffusion models can significantly enhance the quality of existing image-to-avatar methods while maintaining 3D awareness and running at low latency. Our method takes a single frontal image of a subject as input and applies a feedforward image-to-avatar generation pipeline to obtain an initial 3D representation and corresponding noisy renders. These noisy renders are then fed to a single-step diffusion model that is conditioned on the input image(s) and is specifically trained to refine the renders in a multi-view-consistent way. Moreover, we introduce a novel and effective training strategy that includes pre-training on a large corpus of synthetic multi-view data, followed by fine-tuning on high-quality real images. We demonstrate that our approach both qualitatively and quantitatively outperforms the current state of the art for portrait novel-view synthesis, while remaining time-efficient.
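Read end to end, the abstract describes three stages: feedforward avatar generation, rendering of target views, and single-step conditioned refinement. A minimal sketch under assumed interfaces follows; `avatar_model`, `avatar.render`, and `refiner` are placeholders, since the paper does not publish these APIs.

```python
import torch

def synthesize_novel_views(input_image, target_cameras, avatar_model, refiner):
    # Stage 1: feedforward image-to-avatar, one pass from a single frontal
    # photo to a renderable 3D representation (hypothetical interface).
    avatar = avatar_model(input_image)

    refined = []
    for camera in target_cameras:
        # Stage 2: render the initial, artifact-prone ("noisy") view.
        raw = avatar.render(camera)
        # Stage 3: single-step diffusion refinement conditioned on the input
        # image, so the subject's identity is preserved across all views.
        refined.append(refiner(raw, cond=input_image))
    return torch.stack(refined)
```

Because refinement is conditioned on the same input image for every camera, the per-view refinements stay anchored to one identity, which is how the design targets multi-view consistency.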
Problem

Research questions and friction points this paper is trying to address.

Enhancing 3D portrait synthesis quality from single images
Reducing artifacts and preserving subject identity in 3D avatars
Achieving multi-view consistency with low-latency diffusion refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-step diffusion model refines noisy 3D renders
Multi-view consistent training enhances portrait identity preservation
Feedforward pipeline with synthetic pre-training and real-image fine-tuning (see the training sketch below)
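As a rough illustration of the two-stage strategy: pre-train the refiner on rendered synthetic multi-view data for 3D awareness, then fine-tune on real photos for photorealism. The loader names, step counts, and L1 reconstruction loss below are all assumptions; the paper's actual objectives are not reproduced here.

```python
import torch
import torch.nn.functional as F

def train_two_stage(refiner, synthetic_loader, real_loader,
                    pretrain_steps=100_000, finetune_steps=20_000, lr=1e-4):
    opt = torch.optim.AdamW(refiner.parameters(), lr=lr)

    def run_stage(loader, num_steps):
        data = iter(loader)
        for _ in range(num_steps):
            try:
                noisy_render, cond_image, target_view = next(data)
            except StopIteration:  # restart the loader when an epoch ends
                data = iter(loader)
                noisy_render, cond_image, target_view = next(data)
            pred = refiner(noisy_render, cond_image)
            # Simple per-pixel reconstruction loss as a stand-in objective.
            loss = F.l1_loss(pred, target_view)
            opt.zero_grad()
            loss.backward()
            opt.step()

    run_stage(synthetic_loader, pretrain_steps)  # Stage 1: synthetic multi-view pre-training
    run_stage(real_loader, finetune_steps)       # Stage 2: fine-tune on high-quality real images
```

Synthetic multi-view data supplies ground-truth target views from arbitrary cameras, which real-world portrait photos rarely provide; fine-tuning on real images then closes the realism gap.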