🤖 AI Summary
To address the high image acquisition cost, the substantial computational overhead, and the difficulty of balancing fidelity and efficiency in large-scale NeRF reconstruction, this paper proposes an uncertainty-guided incremental optimal view selection framework. The method introduces a novel hybrid uncertainty model that integrates pixel-level rendering uncertainty with camera-pose-level positional uncertainty, enabling information-gain-driven, dynamically incremental view selection. The framework is architecture-agnostic and jointly optimizes view selection and NeRF training. It achieves high-fidelity reconstruction of large-scale scenes while significantly reducing both the number of required input images and GPU computation. Extensive experiments on multiple real-world large-scale scenes demonstrate the framework's effectiveness and its generalizability across diverse NeRF backbones and scene geometries.
📝 Abstract
Large-scale Neural Radiance Field (NeRF) reconstruction is typically hindered by the need for extensive image datasets and substantial computational resources. This paper introduces IOVS4NeRF, a framework that employs an uncertainty-guided incremental optimal view selection strategy adaptable to various NeRF implementations. Specifically, by leveraging a hybrid uncertainty model that combines rendering and positional uncertainties, the proposed method selects the most informative view from a candidate set, thereby enabling incremental optimization of the scene reconstruction. Detailed experiments demonstrate that IOVS4NeRF achieves high-fidelity NeRF reconstruction with minimal computational resources, making it suitable for large-scale scene applications.
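The selection strategy described in the abstract can be sketched as a simple scoring loop: each candidate view gets a hybrid score combining its rendering uncertainty and its positional uncertainty, and the highest-scoring view is added next. All names, the linear weighting, and the toy numbers below are illustrative assumptions, not the paper's actual formulation:

```python
def hybrid_uncertainty(render_unc: float, pos_unc: float, lam: float = 0.5) -> float:
    """Combine per-view rendering uncertainty with positional uncertainty.

    `lam` is a hypothetical trade-off weight; the paper's exact combination
    rule (weighted sum, product, learned fusion, ...) may differ.
    """
    return render_unc + lam * pos_unc

def select_next_view(candidates: dict) -> str:
    """Return the candidate id with the highest hybrid uncertainty,
    i.e., the view expected to contribute the most information gain."""
    scores = {vid: hybrid_uncertainty(r, p) for vid, (r, p) in candidates.items()}
    return max(scores, key=scores.get)

# Toy candidate pool: view id -> (mean rendering uncertainty, positional uncertainty)
candidates = {
    "view_a": (0.12, 0.40),
    "view_b": (0.35, 0.10),
    "view_c": (0.20, 0.55),
}
best = select_next_view(candidates)  # "view_c": 0.20 + 0.5 * 0.55 = 0.475 is the maximum
```

In the full incremental pipeline, this selection step would alternate with NeRF training: the chosen view is added to the training set, uncertainties are re-estimated, and the loop repeats until a budget or quality target is met.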