Flex3D: Feed-Forward 3D Generation With Flexible Reconstruction Model And Input View Curation

📅 2024-10-01
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Existing feed-forward methods for high-quality 3D content generation from a single image or sparse views are constrained by fixed, limited input views, leading to compromised multi-view consistency and reconstruction fidelity. Method: We propose a two-stage dynamic adaptive framework. In Stage I, a multi-view/video diffusion model generates an arbitrary number of candidate views, which are then filtered by a joint view-quality-and-consistency evaluation module to select a high-fidelity subset. In Stage II, a novel tri-plane Transformer—designed to handle variable-length view sequences—directly regresses 3D Gaussian point clouds. Contribution/Results: Our framework introduces the first “generate-then-filter” co-design paradigm and a view-count-agnostic 3D reconstruction architecture, eliminating rigid viewpoint constraints. It achieves state-of-the-art performance across multiple quantitative metrics and attains a user study win rate exceeding 92%.
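The Stage I "generate-then-filter" idea can be sketched in a few lines: generate a pool of candidate views, score each for quality and cross-view consistency, and keep only the views that pass both checks. This is a minimal illustrative sketch, not the paper's implementation; the scoring values, thresholds, and the `CandidateView` structure are all assumptions for demonstration.

```python
# Hypothetical sketch of Flex3D's Stage I view curation: filter a pool of
# generated candidate views by quality and consistency scores.
# Scores, thresholds, and data layout are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CandidateView:
    view_id: int
    quality: float      # e.g. from a learned per-view quality scorer, in [0, 1]
    consistency: float  # e.g. agreement with the other candidates, in [0, 1]

def curate_views(candidates, quality_thresh=0.5, consistency_thresh=0.5):
    """Keep only views that pass both the quality and consistency checks."""
    return [
        v for v in candidates
        if v.quality >= quality_thresh and v.consistency >= consistency_thresh
    ]

candidates = [
    CandidateView(0, 0.92, 0.88),
    CandidateView(1, 0.31, 0.90),  # low quality -> filtered out
    CandidateView(2, 0.85, 0.42),  # inconsistent -> filtered out
    CandidateView(3, 0.78, 0.81),
]
selected = curate_views(candidates)
print([v.view_id for v in selected])  # -> [0, 3]
```

The surviving subset (here views 0 and 3) is what gets passed to the Stage II reconstruction model, so poor or inconsistent generations never reach the 3D regressor.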

📝 Abstract
Generating high-quality 3D content from text, single images, or sparse view images remains a challenging task with broad applications. Existing methods typically employ multi-view diffusion models to synthesize multi-view images, followed by a feed-forward process for 3D reconstruction. However, these approaches are often constrained by a small and fixed number of input views, limiting their ability to capture diverse viewpoints and, even worse, leading to suboptimal generation results if the synthesized views are of poor quality. To address these limitations, we propose Flex3D, a novel two-stage framework capable of leveraging an arbitrary number of high-quality input views. The first stage consists of a candidate view generation and curation pipeline. We employ a fine-tuned multi-view image diffusion model and a video diffusion model to generate a pool of candidate views, enabling a rich representation of the target 3D object. Subsequently, a view selection pipeline filters these views based on quality and consistency, ensuring that only high-quality and reliable views are used for reconstruction. In the second stage, the curated views are fed into a Flexible Reconstruction Model (FlexRM), built upon a transformer architecture that can effectively process an arbitrary number of inputs. FlexRM directly outputs 3D Gaussian points leveraging a tri-plane representation, enabling efficient and detailed 3D generation. Through extensive exploration of design and training strategies, we optimize FlexRM to achieve superior performance in both reconstruction and generation tasks. Our results demonstrate that Flex3D achieves state-of-the-art performance, with a user study winning rate of over 92% in 3D generation tasks when compared to several of the latest feed-forward 3D generative models.
Problem

Research questions and friction points this paper is trying to address.

Generating high-quality 3D content from limited inputs
Overcoming constraints of fixed input view numbers
Improving 3D reconstruction with flexible view processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flexible multi-view diffusion model for diverse input
Quality-consistent view selection for reliable reconstruction
Transformer-based FlexRM for arbitrary input processing
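The "arbitrary input processing" point above hinges on a standard transformer technique: pad per-view token sequences in a batch to a common length and carry a boolean mask so padded slots can be excluded from attention. This is a minimal NumPy sketch of that batching step under assumed shapes; FlexRM's actual tokenization and architecture are not specified here.

```python
# Hypothetical sketch of batching a variable number of input views, as a
# view-count-agnostic transformer like FlexRM might do: pad to the maximum
# view count in the batch and build a validity mask for attention.
# The feature dimension and shapes are illustrative assumptions.
import numpy as np

def pad_and_mask(view_tokens, feat_dim=4):
    """view_tokens: list of (n_views_i, feat_dim) arrays, one per sample."""
    max_views = max(t.shape[0] for t in view_tokens)
    batch = np.zeros((len(view_tokens), max_views, feat_dim))
    mask = np.zeros((len(view_tokens), max_views), dtype=bool)  # True = real view
    for i, t in enumerate(view_tokens):
        batch[i, : t.shape[0]] = t
        mask[i, : t.shape[0]] = True
    return batch, mask

sample_a = np.ones((2, 4))   # a sample with 2 curated input views
sample_b = np.ones((5, 4))   # a sample with 5 curated input views
batch, mask = pad_and_mask([sample_a, sample_b])
print(batch.shape)           # (2, 5, 4)
print(mask.sum(axis=1))      # [2 5] -- real views per sample
```

Because the mask travels with the batch, the same model weights can consume two views or five without retraining, which is what removes the rigid viewpoint constraint described above.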