🤖 AI Summary
To address the challenges of detail loss, background leakage, and geometric inconsistency in close-range 3D scene reconstruction and novel view synthesis from sparse input views, this paper proposes a hierarchical reconstruction framework based on point-cloud-conditioned diffusion models. Methodologically, we design an occlusion-aware noise-suppression strategy and a global structural guidance mechanism, leveraging dense multi-view point clouds as geometric priors to constrain the diffusion process; we further introduce pixel-level mapping optimization and temporal point cloud fusion to ensure spatiotemporal consistency. Crucially, we explicitly embed geometric structure into the diffusion model, enabling fine-grained texture recovery and holistic structural integrity under sparse conditioning. Evaluated on multiple close-range datasets, our method achieves state-of-the-art performance in PSNR, LPIPS, and visual quality, demonstrating superior fidelity in both geometric coherence and textural detail.
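The "temporal point cloud fusion" step above can be illustrated with a minimal sketch: per-view camera-space points are transformed into a shared world frame and concatenated into one dense cloud, which can then act as a global geometric prior. This is a generic illustration under assumed conventions (`poses_cam_to_world` as 4x4 matrices, N x 3 point arrays), not the paper's exact implementation.

```python
import numpy as np

def fuse_point_clouds(points_per_view, poses_cam_to_world):
    """Fuse per-view camera-space point clouds into one world-frame cloud.

    points_per_view: list of (N_i, 3) arrays of camera-space points.
    poses_cam_to_world: list of (4, 4) camera-to-world transforms.
    Returns a (sum N_i, 3) array of world-space points.
    """
    fused = []
    for pts, T in zip(points_per_view, poses_cam_to_world):
        # Homogenize, transform into the world frame, drop the w coordinate.
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 4)
        fused.append((pts_h @ T.T)[:, :3])
    return np.vstack(fused)
```

In practice a fused cloud like this would typically also be filtered (e.g. by confidence or visibility) before being rendered into conditioning signals for the diffusion model.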
📝 Abstract
Reconstructing 3D scenes and synthesizing novel views from sparse input views is a highly challenging task. Recent advances in video diffusion models have demonstrated strong temporal reasoning capabilities, making them a promising tool for enhancing reconstruction quality under sparse-view settings. However, existing approaches are primarily designed for modest viewpoint variations and struggle to capture fine-grained details in close-up scenarios, where input information is severely limited. In this paper, we present a diffusion-based framework, called CloseUpShot, for close-up novel view synthesis from sparse inputs via point-conditioned video diffusion. Specifically, we observe that pixel-warping conditioning suffers from severe sparsity and background leakage in close-up settings. To address this, we propose hierarchical warping and occlusion-aware noise suppression, which enhance the quality and completeness of the conditioning images fed to the video diffusion model. Furthermore, we introduce global structure guidance, which leverages a dense fused point cloud to provide consistent geometric context to the diffusion process, compensating for the lack of globally consistent 3D constraints in sparse conditioning inputs. Extensive experiments on multiple datasets demonstrate that our method outperforms existing approaches, especially in close-up novel view synthesis, clearly validating the effectiveness of our design.
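The pixel-warping conditioning mentioned above follows a standard depth-based forward-warping pattern: back-project source pixels to 3D with their depths, transform them into the target camera frame, and reproject. Points that land outside the target view (or behind the camera) are dropped, which is exactly why close-up targets leave sparse, hole-ridden conditioning images. The sketch below shows this generic operation; the function name, pinhole intrinsics `K`, and the 4x4 relative pose `T_src_to_tgt` are illustrative assumptions, not the paper's API.

```python
import numpy as np

def warp_pixels(depth_src, K, T_src_to_tgt):
    """Forward-warp source pixels into a target view via depth reprojection.

    depth_src: (H, W) per-pixel depth in the source camera.
    K: (3, 3) shared pinhole intrinsics.
    T_src_to_tgt: (4, 4) rigid transform from source to target camera frame.
    Returns (uv, z): target-image pixel coordinates (2, M) and depths (M,)
    for the points with positive depth in front of the target camera.
    """
    h, w = depth_src.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(np.float64)
    # Back-project to 3D in the source camera frame.
    pts = (np.linalg.inv(K) @ pix.T) * depth_src.reshape(1, -1)   # (3, N)
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])           # homogeneous (4, N)
    # Move points into the target camera frame and project.
    pts_tgt = (T_src_to_tgt @ pts_h)[:3]
    proj = K @ pts_tgt
    valid = proj[2] > 1e-6                                         # keep points in front
    uv = proj[:2, valid] / proj[2, valid]
    return uv, pts_tgt[2, valid]
```

A splatting step (plus the kind of occlusion handling and hole filling the paper targets) would then rasterize these scattered `uv` samples into an actual conditioning image.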