GaussianLens: Localized High-Resolution Reconstruction via On-Demand Gaussian Densification

📅 2025-09-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address computational redundancy caused by global high-resolution rendering in 3D Gaussian Splatting (3DGS) reconstruction, this paper introduces the first *on-demand local high-resolution reconstruction* task: enhancing geometric and textural fidelity exclusively within a user-specified Region of Interest (RoI), leveraging sparse high-resolution multi-view observations. Methodologically, we propose a pixel-guided feed-forward densification network that jointly fuses the initial 3D Gaussian distribution with multi-view image features to adaptively densify and optimize Gaussian ellipsoids solely inside the RoI. Our core contribution is the decoupling of high-resolution reconstruction from a global optimization into a local, instantaneous, and generalizable pixel-level control process. Experiments on 1024×1024 images demonstrate significant improvements in local geometric and texture fidelity, a 3.2× speedup in inference time, and a 57% reduction in GPU memory consumption—achieving an effective balance among accuracy, efficiency, and interactivity.

Technology Category

Application Category

📝 Abstract
We perceive our surroundings with an active focus, paying more attention to regions of interest, such as the shelf labels in a grocery store. When it comes to scene reconstruction, this human perception trait calls for spatially varying degrees of detail ready for closer inspection in critical regions, preferably reconstructed on demand. While recent works in 3D Gaussian Splatting (3DGS) achieve fast, generalizable reconstruction from sparse views, their uniform resolution output leads to high computational costs unscalable to high-resolution training. As a result, they cannot leverage available images at their original high resolution to reconstruct details. Per-scene optimization methods reconstruct finer details with adaptive density control, yet require dense observations and lengthy offline optimization. To bridge the gap between the prohibitive cost of high-resolution holistic reconstructions and the user needs for localized fine details, we propose the problem of localized high-resolution reconstruction via on-demand Gaussian densification. Given a low-resolution 3DGS reconstruction, the goal is to learn a generalizable network that densifies the initial 3DGS to capture fine details in a user-specified local region of interest (RoI), based on sparse high-resolution observations of the RoI. This formulation avoids the high cost and redundancy of uniformly high-resolution reconstructions and fully leverages high-resolution captures in critical regions. We propose GaussianLens, a feed-forward densification framework that fuses multi-modal information from the initial 3DGS and multi-view images. We further design a pixel-guided densification mechanism that effectively captures details under large resolution increases. Experiments demonstrate our method's superior performance in local fine detail reconstruction and strong scalability to images of up to $1024 imes1024$ resolution.
Problem

Research questions and friction points this paper is trying to address.

Achieving localized high-resolution reconstruction via on-demand Gaussian densification
Bridging cost of holistic reconstructions with user needs for fine details
Learning generalizable network to densify 3DGS in specified regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

On-demand Gaussian densification for localized reconstruction
Feed-forward framework fusing multi-modal 3DGS data
Pixel-guided mechanism enabling high-resolution detail capture
🔎 Similar Papers
No similar papers found.