LVT: Large-Scale Scene Reconstruction via Local View Transformers

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the quadratic complexity of global self-attention in Transformer-based large-scale scene reconstruction and novel-view synthesis, this paper proposes the Local View Transformer (LVT). LVT restricts attention to geometrically nearby views and introduces a positional encoding conditioned on the relative camera pose between the query view and its neighbors, sidestepping the global-attention bottleneck. It adopts 3D Gaussian Splatting as a differentiable, high-fidelity scene representation and models both color and opacity as view-dependent. According to the authors, LVT is the first method to enable single-forward-pass reconstruction of arbitrarily large, high-resolution scenes, supporting high-quality novel-view synthesis and interactive rendering. Experiments show that LVT substantially improves computational efficiency and scalability while preserving reconstruction accuracy, overcoming a key limitation of conventional Transformers in large-scene 3D reconstruction.
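The two geometric ingredients described above, selecting a local neighborhood of views and conditioning on the relative camera pose, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the nearest-neighbor criterion (Euclidean distance between camera centers) are assumptions.

```python
import numpy as np

def nearest_views(centers: np.ndarray, query_idx: int, k: int) -> np.ndarray:
    """Indices of the k views whose camera centers lie closest to the query
    view (excluding the query itself) -- a stand-in for LVT's local
    neighborhood selection."""
    d = np.linalg.norm(centers - centers[query_idx], axis=1)
    d[query_idx] = np.inf  # never pick the query view as its own neighbor
    return np.argsort(d)[:k]

def relative_pose(T_query: np.ndarray, T_neighbor: np.ndarray) -> np.ndarray:
    """Relative transform taking the neighbor's camera frame into the query's
    camera frame, given 4x4 world-to-camera matrices. A pose encoding could
    condition attention on this matrix instead of absolute positions."""
    return T_query @ np.linalg.inv(T_neighbor)
```

Because attention sees only the relative transform, the encoding is invariant to a global rigid motion of the whole capture, which is what lets the model generalize to scenes of arbitrary extent.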

📝 Abstract
Large transformer models are proving to be a powerful tool for 3D vision and novel view synthesis. However, the standard Transformer's well-known quadratic complexity makes it difficult to scale these methods to large scenes. To address this challenge, we propose the Local View Transformer (LVT), a large-scale scene reconstruction and novel view synthesis architecture that circumvents the need for the quadratic attention operation. Motivated by the insight that spatially nearby views provide more useful signal about the local scene composition than distant views, our model processes all information in a local neighborhood around each view. To attend to tokens in nearby views, we leverage a novel positional encoding that conditions on the relative geometric transformation between the query and nearby views. We decode the output of our model into a 3D Gaussian Splat scene representation that includes both color and opacity view-dependence. Taken together, the Local View Transformer enables reconstruction of arbitrarily large, high-resolution scenes in a single forward pass. See our project page for results and interactive demos https://toobaimt.github.io/lvt/.
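The local attention idea in the abstract, each view attending only to tokens from its spatial neighborhood rather than to all views, can be illustrated with a single-head sketch. This is a hypothetical simplification: the actual model is multi-head and injects the relative-pose encoding into the attention computation, which is omitted here.

```python
import numpy as np

def local_view_attention(tokens, view_of_token, query_view, neighbor_views):
    """Attention restricted to a local view neighborhood (hypothetical sketch).

    tokens:        (N, d) array of all view tokens
    view_of_token: (N,) array giving the source view index of each token
    Only tokens from query_view and neighbor_views participate as keys/values,
    so cost scales with neighborhood size, not the total number of views.
    """
    allowed = np.isin(view_of_token, [query_view] + list(neighbor_views))
    q = tokens[view_of_token == query_view]   # (Nq, d) queries from this view
    kv = tokens[allowed]                      # (Nk, d) neighborhood keys/values
    scores = q @ kv.T / np.sqrt(tokens.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)         # softmax over neighborhood only
    return w @ kv
```

With a fixed neighborhood size k, total attention cost grows linearly in the number of views instead of quadratically, which is the scaling property the abstract claims.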
Problem

Research questions and friction points this paper is trying to address.

Addresses the quadratic self-attention complexity that limits Transformers in large-scale 3D scene reconstruction
Enables reconstruction of arbitrarily large scenes by processing only local neighborhoods of views
Supports high-resolution novel view synthesis via attention conditioned on relative geometric transformations between views
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local View Transformer restricts attention to spatially nearby views, avoiding global quadratic attention
Novel positional encoding conditioned on the relative geometric transformation between query and neighboring views
Decodes into a 3D Gaussian Splat scene representation with view-dependent color and opacity
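The view-dependent color in the 3D Gaussian Splat output can be illustrated with the degree-1 spherical-harmonics evaluation used by standard 3DGS implementations. LVT additionally makes opacity view-dependent; its exact parameterization is not given here, so this sketch covers only the conventional color case.

```python
import numpy as np

# Degree-0 and degree-1 SH basis constants from standard 3DGS implementations.
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def view_dependent_color(sh: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate one Gaussian's RGB color from degree-1 SH coefficients.

    sh:       (4, 3) coefficients -- one DC band plus three linear bands
    view_dir: unit vector from the camera toward the Gaussian center
    """
    x, y, z = view_dir
    c = (SH_C0 * sh[0]
         - SH_C1 * y * sh[1]
         + SH_C1 * z * sh[2]
         - SH_C1 * x * sh[3])
    return np.clip(c + 0.5, 0.0, 1.0)  # 3DGS convention: shift then clamp
```

The three linear bands are what let the same Gaussian appear differently from different directions, e.g. to approximate specular highlights.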