🤖 AI Summary
Existing Transformer-based feed-forward multi-view 3D Gaussian reconstruction methods suffer from prohibitive computational cost and scale poorly to high-resolution images and large numbers of input views, primarily because they apply full attention across all image tokens from all views. This paper proposes iLRM, an iterative Large 3D Reconstruction Model. Its core innovations include: (i) decoupling the scene representation from the input views to keep the 3D representation compact; (ii) a two-stage attention mechanism—coarse-grained intra-view attention followed by fine-grained cross-view attention—that reduces computational complexity; and (iii) layer-wise high-resolution feature injection combined with a compact explicit 3D Gaussian representation. Together, these designs significantly improve both scalability and reconstruction fidelity. Evaluated on RE10K and DL3DV, iLRM achieves state-of-the-art reconstruction quality at substantially faster inference speed, with performance consistently improving as the number of input views increases.
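The scalability claim can be made concrete with a back-of-the-envelope count of attention interactions. The sketch below compares full cross-view attention against a two-stage scheme in which a fixed set of scene tokens mediates cross-view interaction; the specific token counts and the exact staging are illustrative assumptions, not figures from the paper:

```python
def full_attention_pairs(views: int, tokens_per_view: int) -> int:
    # Full cross-view self-attention: every image token attends to every
    # token from every view, so cost is quadratic in (views * tokens).
    total = views * tokens_per_view
    return total * total

def two_stage_pairs(views: int, tokens_per_view: int, scene_tokens: int) -> int:
    # Stage 1 (illustrative): attention restricted within each view,
    # quadratic per view but not across views.
    intra = views * tokens_per_view * tokens_per_view
    # Stage 2 (illustrative): a compact, view-independent set of scene
    # tokens cross-attends to the image tokens, linear in the view count.
    cross = scene_tokens * views * tokens_per_view
    return intra + cross

# Illustrative numbers: 16 views, 1024 tokens per view, 4096 scene tokens.
full = full_attention_pairs(16, 1024)      # (16 * 1024)^2 = 268,435,456
staged = two_stage_pairs(16, 1024, 4096)   # 16,777,216 + 67,108,864
print(f"full: {full:,}  two-stage: {staged:,}  ratio: {full / staged:.1f}x")
```

Under these assumed sizes the staged scheme needs roughly 3x fewer interactions, and the gap widens as views are added, since only the linear cross-attention term grows with the view count—consistent with the paper's claim that quality keeps improving as more input views are supplied at comparable cost.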
📝 Abstract
Feed-forward 3D modeling has emerged as a promising approach for rapid and high-quality 3D reconstruction. In particular, directly generating explicit 3D representations, such as 3D Gaussian splatting, has attracted significant attention due to its fast, high-quality rendering and numerous applications. However, many state-of-the-art methods, primarily based on transformer architectures, suffer from severe scalability issues because they rely on full attention across image tokens from multiple input views, resulting in prohibitive computational costs as the number of views or image resolution increases. Toward scalable and efficient feed-forward 3D reconstruction, we introduce an iterative Large 3D Reconstruction Model (iLRM) that generates 3D Gaussian representations through an iterative refinement mechanism, guided by three core principles: (1) decoupling the scene representation from input-view images to enable compact 3D representations; (2) decomposing fully-attentional multi-view interactions into a two-stage attention scheme to reduce computational costs; and (3) injecting high-resolution information at every layer to achieve high-fidelity reconstruction. Experimental results on widely used datasets, such as RE10K and DL3DV, demonstrate that iLRM outperforms existing methods in both reconstruction quality and speed. Notably, iLRM exhibits superior scalability, delivering significantly higher reconstruction quality under comparable computational cost by efficiently leveraging a larger number of input views.