GeoMVD: Geometry-Enhanced Multi-View Generation Model Based on Geometric Information Extraction

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-view image generation faces two key challenges: cross-view structural inconsistency and loss of high-resolution texture details. To address these, we propose a geometry-guided diffusion model featuring three core innovations: (1) a shared geometric representation jointly encoding depth maps, surface normal maps, and foreground masks; (2) a multi-view geometric feature extraction module coupled with a decoupled geometric-enhanced attention mechanism; and (3) a dynamic geometry-strength modulation strategy integrated within an iterative refinement framework. Our approach preserves generation efficiency while significantly improving cross-view geometric consistency and high-frequency texture fidelity. Extensive experiments demonstrate superior visual coherence and naturalness over state-of-the-art methods across multiple benchmarks. Moreover, the generated images exhibit strong generalization in downstream 3D reconstruction and VR/AR applications, validating the robustness and practical utility of our geometry-aware diffusion paradigm.

📝 Abstract
Multi-view image generation holds significant application value in computer vision, particularly in domains like 3D reconstruction, virtual reality, and augmented reality. Most existing methods, which rely on extending single images, face notable computational challenges in maintaining cross-view consistency and generating high-resolution outputs. To address these issues, we propose the Geometry-guided Multi-View Diffusion Model, which incorporates mechanisms for extracting multi-view geometric information and adjusting the intensity of geometric features to generate images that are both consistent across views and rich in detail. Specifically, we design a multi-view geometry information extraction module that leverages depth maps, normal maps, and foreground segmentation masks to construct a shared geometric structure, ensuring shape and structural consistency across different views. To enhance consistency and detail restoration during generation, we develop a decoupled geometry-enhanced attention mechanism that strengthens feature focus on key geometric details, thereby improving overall image quality and detail preservation. Furthermore, we apply an adaptive learning strategy that fine-tunes the model to better capture spatial relationships and visual coherence between the generated views, ensuring realistic results. Our model also incorporates an iterative refinement process that progressively improves the output quality through multiple stages of image generation. Finally, a dynamic geometry information intensity adjustment mechanism is proposed to adaptively regulate the influence of geometric data, optimizing overall quality while ensuring the naturalness of generated images. More details can be found on the project page: https://github.com/SobeyMIL/GeoMVD.com.
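The abstract describes a decoupled geometry-enhanced attention mechanism whose influence is scaled by a dynamic geometry-intensity adjustment. The paper's code is not reproduced here; as a rough, minimal sketch of how such a mechanism *could* be structured (all function names, tensor shapes, and the scalar `strength` parameter are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def geometry_enhanced_attention(img_feats, geo_feats, strength):
    """Toy decoupled attention: a self-attention branch over image tokens
    plus a separately computed cross-attention branch over geometry tokens
    (e.g. features encoded from depth, normal, and mask maps), blended by
    a scalar `strength` standing in for the dynamic intensity adjustment.

    img_feats: (N, D) image tokens; geo_feats: (M, D) geometry tokens.
    """
    d = img_feats.shape[-1]
    # Branch 1: standard self-attention over image tokens.
    self_attn = softmax(img_feats @ img_feats.T / np.sqrt(d)) @ img_feats
    # Branch 2: decoupled cross-attention from image queries to geometry
    # keys/values, computed independently of the self-attention branch.
    geo_attn = softmax(img_feats @ geo_feats.T / np.sqrt(d)) @ geo_feats
    # Dynamic geometry-intensity adjustment: scale only the geometry branch.
    return self_attn + strength * geo_attn
```

In the paper's framework the intensity would presumably vary during the iterative refinement (e.g. with the denoising step); here it is just a fixed scalar, with `strength = 0` reducing the output to plain self-attention.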
Problem

Research questions and friction points this paper is trying to address.

Addressing cross-view consistency challenges in multi-view image generation
Overcoming computational limitations in high-resolution multi-view output generation
Enhancing geometric detail preservation across different viewpoint images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometry-guided multi-view diffusion model with geometric extraction
Decoupled geometry-enhanced attention mechanism for detail restoration
Adaptive learning strategy with dynamic geometry intensity adjustment
Jiaqi Wu
University of Electronic Science and Technology of China, Chengdu, China
Yaosen Chen
University of Electronic Science and Technology of China, Chengdu, China
Shuyuan Zhu
Associate Professor, University of Electronic Science and Technology of China (Signal Processing; Image/Video Compression)