Peering into the Unknown: Active View Selection with Neural Uncertainty Maps for 3D Reconstruction

📅 2025-06-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the active view selection (AVS) problem in 3D reconstruction. We propose a neural uncertainty modeling approach that enables efficient, single-image-driven AVS without iterative optimization or rendering-based inference. Specifically, we design UPNet, a lightweight feedforward network that directly predicts a full-view uncertainty distribution from a single input image. We introduce the novel paradigm of "neural uncertainty map-guided AVS", which supports cross-category zero-shot generalization. Our method integrates viewpoint value assessment with neural 3D rendering frameworks (e.g., NeRF and 3D Gaussian Splatting) via multi-view uncertainty aggregation. Experiments demonstrate that our approach matches baseline reconstruction accuracy using only 50% of the viewpoints, accelerates the AVS stage by up to 400×, and reduces CPU, RAM, and GPU resource consumption by over 50%.

📝 Abstract
Some perspectives naturally provide more information than others. How can an AI system determine which viewpoint offers the most valuable insight for accurate and efficient 3D object reconstruction? Active view selection (AVS) for 3D reconstruction remains a fundamental challenge in computer vision. The aim is to identify the minimal set of views that yields the most accurate 3D reconstruction. Instead of learning a radiance field (e.g., NeRF or 3D Gaussian Splatting) from the current observations and computing uncertainty for each candidate viewpoint, we introduce a novel AVS approach guided by neural uncertainty maps predicted by a lightweight feedforward deep neural network, named UPNet. UPNet takes a single input image of a 3D object and outputs a predicted uncertainty map, representing uncertainty values across all possible candidate viewpoints. By leveraging heuristics derived from observing many natural objects and their associated uncertainty patterns, we train UPNet to learn a direct mapping from viewpoint appearance to uncertainty in the underlying volumetric representations. Next, our approach aggregates all previously predicted neural uncertainty maps to suppress redundant candidate viewpoints and effectively select the most informative one. Using these selected viewpoints, we train 3D neural rendering models and evaluate the quality of novel view synthesis against other competitive AVS methods. Remarkably, despite using only half as many viewpoints as the upper bound, our method achieves comparable reconstruction accuracy. In addition, it significantly reduces computational overhead during AVS, achieving up to a 400 times speedup along with over 50% reductions in CPU, RAM, and GPU usage compared to baseline methods. Notably, our approach generalizes effectively to AVS tasks involving novel object categories, without requiring any additional training.
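The selection loop described above (predict a full-view uncertainty map from each captured image, aggregate the maps to suppress redundancy, then pick the most informative remaining viewpoint) can be sketched as follows. This is an illustrative sketch, not the paper's exact formulation: the `select_next_view` helper, the flat per-viewpoint arrays, and the element-wise-minimum aggregation rule are all assumptions made for the example.

```python
import numpy as np

def select_next_view(predicted_maps, visited):
    """Pick the next viewpoint from aggregated uncertainty maps.

    predicted_maps: list of 1-D arrays, one per observed image; each is a
        full-view uncertainty map over all candidate viewpoints, as would
        be predicted by a UPNet-like feedforward network.
    visited: set of viewpoint indices already captured.
    """
    # Illustrative aggregation: element-wise minimum across observations.
    # A viewpoint stays attractive only if every observation so far still
    # rates it as uncertain, which suppresses redundant candidates.
    aggregated = np.min(np.stack(predicted_maps), axis=0).astype(float)
    aggregated[list(visited)] = -np.inf  # never revisit a captured view
    return int(np.argmax(aggregated))    # most informative remaining view

# Toy usage: 4 candidate viewpoints, 2 observations, viewpoint 0 visited.
maps = [np.array([0.9, 0.1, 0.8, 0.3]),
        np.array([0.7, 0.6, 0.9, 0.2])]
next_view = select_next_view(maps, visited={0})  # → 2
```

In this sketch the per-image network predictions stand in for renderer-side uncertainty estimates, which is what lets the loop avoid training a radiance field at every selection step.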
Problem

Research questions and friction points this paper is trying to address.

AI system determines most informative viewpoint for 3D reconstruction
Minimizes required views while maximizing reconstruction accuracy
Reduces computational overhead and generalizes to novel objects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural uncertainty maps guide view selection
Lightweight UPNet predicts viewpoint uncertainty
Aggregates uncertainty maps to suppress redundancy