Repurposing Geometric Foundation Models for Multi-view Diffusion

📅 2026-03-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing novel view synthesis methods struggle to ensure geometric consistency because they rely on view-agnostic VAE latent spaces. This work proposes the Geometric Latent Diffusion (GLD) framework, which, for the first time, uses the feature space of a geometric foundation model as the latent representation for multi-view diffusion. GLD achieves high-quality, geometrically consistent image generation without large-scale generative pretraining. The method outperforms both VAE- and RAE-based approaches on 2D image quality and 3D consistency metrics, while training over 4.4× faster. Remarkably, its performance rivals that of state-of-the-art methods that depend on extensive text-to-image pretraining.

📝 Abstract
While recent advances in generative latent spaces have driven substantial progress in single-image generation, the optimal latent space for novel view synthesis (NVS) remains largely unexplored. In particular, NVS requires geometrically consistent generation across viewpoints, but existing approaches typically operate in a view-independent VAE latent space. In this paper, we propose Geometric Latent Diffusion (GLD), a framework that repurposes the geometrically consistent feature space of geometric foundation models as the latent space for multi-view diffusion. We show that these features not only support high-fidelity RGB reconstruction but also encode strong cross-view geometric correspondences, providing a well-suited latent space for NVS. Our experiments demonstrate that GLD outperforms both VAE and RAE on 2D image quality and 3D consistency metrics, while accelerating training by more than 4.4× compared to the VAE latent space. Notably, GLD remains competitive with state-of-the-art methods that leverage large-scale text-to-image pretraining, despite training its diffusion model from scratch without such generative pretraining.
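The core idea can be sketched in a few lines. The snippet below is a toy illustration, not the paper's implementation: `geometry_encoder` is a hypothetical stand-in (a fixed random projection) for the frozen geometry foundation model's feature extractor, and only the standard DDPM forward-noising step is shown. The point is the data flow: multi-view images are mapped into a shared geometric feature space, and diffusion operates on those features instead of VAE latents.

```python
import numpy as np

rng = np.random.default_rng(0)

def geometry_encoder(images):
    """Hypothetical stand-in for a frozen geometry foundation model:
    maps each view's RGB pixels to a feature map. Here it is just a
    fixed random linear projection to 16 channels for illustration."""
    B, V, H, W, C = images.shape                        # batch, views, height, width, channels
    W_proj = rng.standard_normal((C, 16)) / np.sqrt(C)  # assumed 16-dim feature space
    return images @ W_proj                              # (B, V, H, W, 16)

def add_noise(z0, t, T=1000):
    """Standard DDPM forward process on the geometric latent:
    z_t = sqrt(alpha_bar_t) * z_0 + sqrt(1 - alpha_bar_t) * eps."""
    betas = np.linspace(1e-4, 2e-2, T)
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar) * z0 + np.sqrt(1.0 - alpha_bar) * eps, eps

# Toy multi-view batch: 2 scenes, 4 views each, 8x8 RGB.
images = rng.random((2, 4, 8, 8, 3))
z0 = geometry_encoder(images)      # cross-view-consistent geometric latent
z_t, eps = add_noise(z0, t=500)    # noisy latent the multi-view diffusion model learns to denoise
print(z0.shape, z_t.shape)
```

A real system would train a multi-view diffusion network to predict `eps` from `z_t` and decode the denoised features back to RGB; those components are omitted here.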
Problem

Research questions and friction points this paper is trying to address.

novel view synthesis
geometric consistency
latent space
multi-view generation
3D consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometric Latent Diffusion
Novel View Synthesis
Geometric Foundation Models
Multi-view Diffusion
Latent Space