R3-RECON: Radiance-Field-Free Active Reconstruction via Renderability

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently selecting the next-best view in active reconstruction to enhance novel-view rendering quality. The authors propose a lightweight framework that does not rely on radiance fields or any specific scene representation. By implicitly modeling a renderability field over SE(3) space using a voxel map, and integrating online observation statistics with a closed-form scoring mechanism, the method enables rapid evaluation of rendering quality for arbitrary candidate views. To accelerate 360-degree utility computation, the approach incorporates panoramic expansion and is compatible with reconstruction pipelines such as 3D Gaussian Splatting. Evaluated on the Replica indoor dataset, the method significantly improves rendering uniformity and 3D reconstruction accuracy compared to existing active Gaussian Splatting approaches under identical viewpoint and time budgets.

📝 Abstract
In active reconstruction, an embodied agent must decide where to look next to efficiently acquire views that support high-quality novel-view rendering. Recent work on active view planning for neural rendering largely derives next-best-view (NBV) criteria by backpropagating through radiance fields or estimating information entropy over 3D Gaussian primitives. While effective, these strategies tightly couple view selection to heavy, representation-specific mechanisms and fail to account for the computational and resource constraints required for lightweight online deployment. In this paper, we revisit active reconstruction from a renderability-centric perspective. We propose $\mathbb{R}^{3}$-RECON, a radiance-field-free active reconstruction framework that induces an implicit, pose-conditioned renderability field over SE(3) from a lightweight voxel map. Our formulation aggregates per-voxel online observation statistics into a unified scalar renderability score that is cheap to update and can be queried in closed form at arbitrary candidate viewpoints in milliseconds, without requiring gradients or radiance-field training. This renderability field is strongly correlated with image-space reconstruction error, naturally guiding NBV selection. We further introduce a panoramic extension that estimates omnidirectional (360$^\circ$) view utility to accelerate candidate evaluation. On the standard indoor Replica dataset, $\mathbb{R}^{3}$-RECON achieves more uniform novel-view quality and higher 3D Gaussian splatting (3DGS) reconstruction accuracy than recent active GS baselines with matched view and time budgets.
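The abstract describes aggregating per-voxel online observation statistics in a lightweight voxel map into a closed-form renderability score that ranks candidate views for NBV selection. A minimal sketch of that general idea, assuming a simple saturating-coverage score over observation counts (the class, field names, and formula are illustrative assumptions, not the paper's actual definitions):

```python
import numpy as np

class RenderabilityField:
    """Hypothetical voxel map storing online per-voxel observation counts.

    Illustrative only: the paper's actual statistics and closed-form score
    over SE(3) are richer than a raw count per voxel.
    """

    def __init__(self, grid_shape=(32, 32, 32)):
        self.counts = np.zeros(grid_shape)  # how often each voxel was observed

    def update(self, observed_indices):
        """Online update: increment counts for voxels seen in the latest frame."""
        for idx in observed_indices:
            self.counts[idx] += 1

    def score(self, visible_indices):
        """Closed-form score for a candidate view: mean saturating coverage of
        the voxels it would see (higher = better expected rendering quality).
        No gradients or radiance-field training involved."""
        if not visible_indices:
            return 0.0
        coverage = [1.0 - np.exp(-self.counts[idx]) for idx in visible_indices]
        return float(np.mean(coverage))

# NBV selection: pick the candidate whose visible voxels are least covered,
# i.e. the lowest renderability score (most to gain by observing it).
field = RenderabilityField()
field.update([(0, 0, 0), (0, 0, 0), (1, 1, 1)])
candidates = {
    "view_a": [(0, 0, 0), (1, 1, 1)],  # well-observed region
    "view_b": [(5, 5, 5), (6, 6, 6)],  # never observed
}
nbv = min(candidates, key=lambda v: field.score(candidates[v]))
# nbv == "view_b": its voxels have zero observations, so coverage is 0.0
```

The saturating form `1 - exp(-count)` is just one convenient monotone map from counts to [0, 1); the point is that both the update and the query are cheap array operations, which is what makes millisecond evaluation of arbitrary candidate views plausible.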
Problem

Research questions and friction points this paper is trying to address.

active reconstruction
next-best-view
renderability
radiance-field-free
lightweight deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

renderability
radiance-field-free
active reconstruction
next-best-view
3D Gaussian splatting