RnG: A Unified Transformer for Complete 3D Modeling from Partial Observations

📅 2026-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental challenge in computer vision of reconstructing complete and coherent 3D structures from limited 2D observations. The authors propose RnG, a unified feed-forward Transformer model that jointly tackles 3D reconstruction and novel view synthesis by predicting implicit 3D representations of full geometry and appearance, enabling efficient rendering of RGBD images from arbitrary viewpoints. The key innovation lies in a reconstruction-guided causal attention mechanism that decouples observed and unobserved regions within the attention layers, while leveraging KV caching as an implicit 3D representation to achieve high-quality novel view generation. RnG achieves state-of-the-art performance on both general 3D reconstruction and novel view synthesis benchmarks, demonstrating high fidelity and real-time inference efficiency.

📝 Abstract
Humans perceive the 3D world through 2D observations from limited viewpoints. While recent feed-forward generalizable 3D reconstruction models excel at recovering 3D structures from sparse images, their representations are often confined to observed regions, leaving unseen geometry unmodeled. This raises a fundamental challenge: can we infer a complete 3D structure from partial 2D observations? We present RnG (Reconstruction and Generation), a novel feed-forward Transformer that unifies these two tasks by predicting an implicit, complete 3D representation. At the core of RnG, we propose a reconstruction-guided causal attention mechanism that separates reconstruction and generation at the attention level and treats the KV-cache as an implicit 3D representation. Arbitrary poses can then efficiently query this cache to render high-fidelity, novel-view RGBD outputs. As a result, RnG not only accurately reconstructs visible geometry but also generates plausible, coherent unseen geometry and appearance. Our method achieves state-of-the-art performance in both generalizable 3D reconstruction and novel view generation, while operating efficiently enough for real-time interactive applications. Project page: https://npucvr.github.io/RnG
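The abstract's core idea, caching keys and values from observed-view tokens and letting pose-conditioned queries cross-attend to that cache, can be illustrated with a minimal sketch. This is not the authors' implementation: all shapes, weights, and token names below are hypothetical placeholders, and the real model would decode the attended features into RGBD images.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 32  # token dimension (illustrative)

# Hypothetical projection weights for a single attention layer.
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

# --- Reconstruction pass: tokens from the observed input views ---
obs_tokens = rng.standard_normal((100, d))  # e.g. patch tokens of input images
# The cache is computed once and reused; in the paper's framing it plays
# the role of an implicit 3D representation of geometry and appearance.
kv_cache = {"K": obs_tokens @ Wk, "V": obs_tokens @ Wv}

# --- Generation pass: pose-conditioned queries for a novel view ---
# New poses only need this cheap cross-attention against the cache,
# not a re-encoding of the input views.
novel_tokens = rng.standard_normal((64, d))  # e.g. ray/pose embeddings
Q = novel_tokens @ Wq
attn = softmax(Q @ kv_cache["K"].T / np.sqrt(d))
rendered_features = attn @ kv_cache["V"]  # would be decoded to RGBD

print(rendered_features.shape)  # (64, 32)
```

The causal structure in the paper additionally keeps observed-region tokens from attending to generated ones, so reconstruction quality is not degraded by the generative branch; the sketch omits that masking for brevity.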
Problem

Research questions and friction points this paper is trying to address.

3D reconstruction
partial observations
complete 3D modeling
novel view synthesis
unseen geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer
3D reconstruction
novel view synthesis
implicit representation
causal attention
Authors
Mochu Xiang (Northwestern Polytechnical University)
Zhelun Shen (Baidu Inc., China)
Xuesong Li (Australian National University, Australia; CSIRO, Australia)
Jiahui Ren (Northwestern Polytechnical University, China)
Jing Zhang (Australian National University, Australia)
Chen Zhao (Baidu Inc., China)
Shanshan Liu (Baidu Inc., China)
Haocheng Feng (Baidu)
Jingdong Wang (Baidu Inc., China)
Yuchao Dai (Northwestern Polytechnical University, China)