Geometry and Perception Guided Gaussians for Multiview-consistent 3D Generation from a Single Image

📅 2025-06-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address multi-view inconsistency, loss of geometric detail, and implausible modeling of occluded regions in single-view 3D object generation, this paper proposes a fine-tuning-free multi-branch Gaussian optimization framework. Methodologically, it integrates perceptual priors from pre-trained 2D diffusion models with geometric priors for initialization, and employs cross-branch mutual enhancement and reprojection constraints to enable joint evolution of geometry and appearance. The key contribution is the first decoupled yet dynamically interactive modeling of geometric and diffusion priors within the 3D Gaussian splatting space, bypassing end-to-end training dependencies. Experiments demonstrate significant improvements over state-of-the-art methods on novel-view synthesis and 3D reconstruction, particularly in multi-view consistency, surface detail fidelity, and plausibility of occluded-region reconstruction.

๐Ÿ“ Abstract
Generating realistic 3D objects from single-view images requires natural appearance, 3D consistency, and the ability to capture multiple plausible interpretations of unseen regions. Existing approaches often rely on fine-tuning pretrained 2D diffusion models or directly generating 3D information through fast network inference or 3D Gaussian Splatting, but their results generally suffer from poor multiview consistency and lack geometric detail. To tackle these issues, we present a novel method that seamlessly integrates geometry and perception priors, without requiring additional model training, to reconstruct detailed 3D objects from a single image. Specifically, we train three different Gaussian branches initialized from the geometry prior, the perception prior, and Gaussian noise, respectively. The geometry prior captures the rough 3D shape, while the perception prior utilizes the pretrained 2D diffusion model to enhance multiview information. Subsequently, we refine the 3D Gaussian branches through mutual interaction between the geometry and perception priors, further enhanced by a reprojection-based strategy that enforces depth consistency. Experiments demonstrate that our method achieves higher-fidelity reconstructions, outperforming existing methods on novel view synthesis and 3D reconstruction and yielding robust, consistent 3D object generation.
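The reprojection-based depth-consistency idea mentioned in the abstract can be sketched as a simple warp-and-compare check between two views. This is a minimal illustration under assumed conventions (pinhole intrinsics `K`, relative pose `R`, `t` from view 1 to view 2, nearest-pixel sampling, and a tolerance `tau`), not the paper's actual implementation:

```python
import numpy as np

def reprojection_depth_consistency(d1, d2, K, R, t, tau=0.05):
    """Warp view-1 depth into view 2 and flag pixels whose reprojected
    depth agrees with view 2's depth map within tolerance tau."""
    H, W = d1.shape
    # Pixel grid of view 1 in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project view-1 pixels to 3D points in the view-1 camera frame.
    pts1 = np.linalg.inv(K) @ pix * d1.reshape(1, -1)
    # Transform into the view-2 camera frame and project with K.
    pts2 = R @ pts1 + t.reshape(3, 1)
    proj = K @ pts2
    z = proj[2]
    u2 = np.round(proj[0] / z).astype(int)
    v2 = np.round(proj[1] / z).astype(int)
    # A pixel is consistent if it lands inside view 2 with positive depth
    # and its reprojected depth matches view 2's depth there.
    consistent = np.zeros(H * W, dtype=bool)
    valid = (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H) & (z > 0)
    idx = np.where(valid)[0]
    consistent[idx] = np.abs(z[idx] - d2[v2[idx], u2[idx]]) < tau
    return consistent.reshape(H, W)
```

In a multi-branch setup, such a mask could be used to downweight supervision at pixels whose depths disagree across rendered views.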
Problem

Research questions and friction points this paper is trying to address.

Achieving multiview-consistent 3D generation from single images
Improving geometric detail and multiview consistency in 3D reconstruction
Integrating geometry and perception priors without additional training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates geometry and perception priors without training
Uses three Gaussian branches for shape and detail
Enhances depth consistency with reprojection strategy
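The cross-branch mutual enhancement listed above can be illustrated with a toy gradient step in which each branch is pulled both toward its own data target and toward the consensus of all branches. The function name, the scalar parameters, and the consistency weight `lam` are all assumptions for illustration, not the paper's actual objective:

```python
import numpy as np

def mutual_enhancement_step(branches, targets, lr=0.1, lam=0.5):
    """One toy optimization step for several parameter branches:
    each branch moves toward its own target (data term) and toward
    the mean of all branches (cross-branch consistency term)."""
    mean = np.mean(branches, axis=0)
    updated = []
    for b, tgt in zip(branches, targets):
        # Gradient of 0.5*(b - tgt)^2 + 0.5*lam*(b - mean)^2 w.r.t. b,
        # treating the branch mean as a constant for this step.
        grad = (b - tgt) + lam * (b - mean)
        updated.append(b - lr * grad)
    return np.stack(updated)
```

Iterating this step drives the branches toward a compromise between their individual targets and mutual agreement: at the fixed point each branch sits at `(target + lam * mean_target) / (1 + lam)`, so the spread between branches shrinks by a factor of `1 + lam` relative to the spread of the targets.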
Pufan Li
Wangxuan Institute of Computer Technology, Peking University, Beijing, China
Bi'an Du
Peking University
3D Computer Vision · Generative Models for 3D
Wei Hu
Wangxuan Institute of Computer Technology, Peking University, Beijing, China