🤖 AI Summary
This work addresses the fundamental challenge in computer vision of reconstructing complete and coherent 3D structures from limited 2D observations. The authors propose RnG, a unified feed-forward Transformer model that jointly tackles 3D reconstruction and novel view synthesis by predicting implicit 3D representations of full geometry and appearance, enabling efficient rendering of RGBD images from arbitrary viewpoints. The key innovation lies in a reconstruction-guided causal attention mechanism that decouples observed and unobserved regions within the attention layers, while leveraging KV caching as an implicit 3D representation to achieve high-quality novel view generation. RnG achieves state-of-the-art performance on both general 3D reconstruction and novel view synthesis benchmarks, demonstrating high fidelity and real-time inference efficiency.
📝 Abstract
Humans perceive the 3D world through 2D observations from limited viewpoints. While recent feed-forward generalizable 3D reconstruction models excel at recovering 3D structures from sparse images, their representations are often confined to observed regions, leaving unseen geometry unmodeled. This raises a fundamental question: can we infer a complete 3D structure from partial 2D observations? We present RnG (Reconstruction and Generation), a novel feed-forward Transformer that unifies these two tasks by predicting an implicit, complete 3D representation. At the core of RnG is a reconstruction-guided causal attention mechanism that separates reconstruction and generation at the attention level and treats the KV cache as an implicit 3D representation, which arbitrary poses can then efficiently query to render high-fidelity novel-view RGBD outputs. As a result, RnG not only accurately reconstructs visible geometry but also generates plausible, coherent unseen geometry and appearance. Our method achieves state-of-the-art performance in both generalizable 3D reconstruction and novel view generation, while operating efficiently enough for real-time interactive applications. Project page: https://npucvr.github.io/RnG
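The abstract's core idea, reusing a KV cache built from observed views as an implicit representation that novel poses can query, can be sketched in a few lines. This is a minimal conceptual illustration, not the paper's implementation: it uses a single attention head with random (untrained) projection matrices, and all names (`obs_tokens`, `pose_tokens`, `kv_cache`) are hypothetical stand-ins for the model's actual tokens.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: each query attends over all cached keys.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 16                                        # token width (illustrative)
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

# "Reconstruction" pass: tokens from observed input views are encoded once;
# their keys/values are cached and serve as the implicit 3D representation.
obs_tokens = rng.standard_normal((8, d))      # 8 stand-in observation tokens
kv_cache = {"k": obs_tokens @ W_k, "v": obs_tokens @ W_v}

# "Generation" pass: tokens for an arbitrary novel pose query the cache.
# The causal split means pose tokens read from observed tokens but never
# write back, so the cache is built once and reused for every viewpoint.
pose_tokens = rng.standard_normal((4, d))
novel_view_features = attention(pose_tokens @ W_q, kv_cache["k"], kv_cache["v"])
print(novel_view_features.shape)              # (4, 16)
```

Because the cache is pose-independent, rendering another viewpoint only costs one cross-attention query, which is what makes this style of representation attractive for real-time novel view synthesis.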