GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement

📅 2024-06-09
🏛️ International Conference on Learning Representations
📈 Citations: 7
Influential: 2
🤖 AI Summary
To address geometric distortion and inaccurate recovery of complex textures (e.g., text, portraits) in multi-view image-based 3D mesh reconstruction, this paper proposes a geometry-texture co-optimization framework. First, it enhances the LRM architecture with differentiable Dual Contouring to enable full-resolution geometric supervision. Second, it introduces a rendering-driven NeRF fine-tuning mechanism to improve surface detail modeling. Third, it proposes a lightweight, instance-aware texture refinement module that achieves high-fidelity texture recovery in just 4 seconds while preserving feed-forward inference speed. Built on a triplane representation and differentiable rendering, the method achieves a state-of-the-art PSNR of 29.79 on the GSO dataset and significantly improves both 2D image fidelity and 3D geometric accuracy. It also natively supports text- and image-to-3D generation.

📝 Abstract
We propose a novel approach for 3D mesh reconstruction from multi-view images. Our method takes inspiration from large reconstruction models like LRM that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images. However, in our method, we introduce several important modifications that allow us to significantly enhance 3D reconstruction quality. First of all, we examine the original LRM architecture and find several shortcomings. Subsequently, we introduce respective modifications to the LRM architecture, which lead to improved multi-view image representation and more computationally efficient training. Second, in order to improve geometry reconstruction and enable supervision at full image resolution, we extract meshes from the NeRF field in a differentiable manner and fine-tune the NeRF model through mesh rendering. These modifications allow us to achieve state-of-the-art performance on both 2D and 3D evaluation metrics, such as a PSNR of 28.67 on the Google Scanned Objects (GSO) dataset. Despite these superior results, our feed-forward model still struggles to reconstruct complex textures, such as text and portraits on assets. To address this, we introduce a lightweight per-instance texture refinement procedure. This procedure fine-tunes the triplane representation and the NeRF color estimation model on the mesh surface using the input multi-view images in just 4 seconds. This refinement improves the PSNR to 29.79 and achieves faithful reconstruction of complex textures, such as text. Additionally, our approach enables various downstream applications, including text- or image-to-3D generation.
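The per-instance texture refinement in the abstract can be sketched as a short gradient-descent loop: geometry is frozen, and only color parameters are fitted to the pixel colors observed in the input multi-view images. The paper's actual procedure optimizes triplane features and the NeRF color model through differentiable rendering; the per-point color table, learning rate, and step count below are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

# Toy stand-in for texture refinement: "observed" are surface colors
# gathered from the input views; "colors" is the color field we refine.
rng = np.random.default_rng(0)
observed = rng.random((64, 3))    # target surface colors from input views
colors = np.full((64, 3), 0.5)    # initial colors under frozen geometry

lr = 0.5
for step in range(200):
    residual = colors - observed  # photometric (L2) rendering error
    colors -= lr * residual       # gradient step on color parameters only
# colors now closely match the observed multi-view colors
```

Because geometry stays fixed, only the lightweight color parameters are updated, which is what keeps the refinement to a few seconds per instance.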
Problem

Research questions and friction points this paper is trying to address.

Enhancing 3D mesh reconstruction from multi-view images
Improving geometry and texture refinement in 3D models
Achieving high-fidelity reconstruction of complex textures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhances LRM architecture for better image representation
Differentiable mesh extraction improves geometry reconstruction
Lightweight texture refinement boosts complex texture fidelity
Peiye Zhuang
Snap Inc.
Songfang Han
Snap Inc.
Chaoyang Wang
Snap Inc.
Aliaksandr Siarohin
unitn.it
computer vision, deep learning, image and video generation
Jiaxu Zou
Snap Inc.
Michael Vasilkovsky
Snap Inc.
Computer Graphics, Generative AI, World Models
V. Shakhrai
Snap Inc.
Sergey Korolev
Snap Inc.
S. Tulyakov
Snap Inc.
Hsin-Ying Lee
stealth mode startup
Computer Vision