A Single Image and Multimodality Is All You Need for Novel View Synthesis

📅 2026-02-20
📈 Citations: 0 · Influential: 0
📄 PDF
🤖 AI Summary
This work addresses single-image novel view synthesis under adverse conditions (low texture, poor weather, severe occlusion), where unreliable monocular depth estimation leads to geometric inconsistencies and degraded visual quality. To mitigate this, the authors propose a plug-and-play multimodal depth reconstruction framework that integrates extremely sparse radar or LiDAR range measurements. By fitting localized Gaussian processes in the angular domain, the method efficiently infers dense depth maps while quantifying their uncertainty. The reconstructed dense depth then serves as a geometric prior for diffusion-based image generation, with no modification to the original generative architecture. Evaluated on real-world driving scenarios, the approach significantly outperforms purely vision-based methods, with notable improvements in both geometric consistency and visual fidelity.
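
The localized Gaussian process in the angular domain is the core of the reconstruction. Below is a minimal sketch of the idea, reduced to a 1-D azimuth sweep for clarity; the RBF kernel, window size, noise level, and all variable names are assumptions for illustration, not the authors' code:

```python
# Sketch: sparse-to-dense depth via localized GP regression over angle.
# Assumptions: RBF kernel, fixed angular window, 1-D azimuth domain.
import numpy as np

def rbf_kernel(a, b, length_scale=0.05, variance=1.0):
    """Squared-exponential kernel over angular coordinates (radians)."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def local_gp_depth(theta_obs, depth_obs, theta_query, window=0.2, noise=1e-2):
    """Infer dense depth and variance at query angles from sparse range
    returns, using only observations inside a local angular window around
    each query so every GP solve stays small."""
    mean = np.empty_like(theta_query)
    var = np.empty_like(theta_query)
    for i, tq in enumerate(theta_query):
        mask = np.abs(theta_obs - tq) < window  # local support only
        if not mask.any():
            mean[i], var[i] = np.nan, np.inf    # no data: flag as uncertain
            continue
        t, d = theta_obs[mask], depth_obs[mask]
        K = rbf_kernel(t, t) + noise * np.eye(t.size)
        k_star = rbf_kernel(np.array([tq]), t)[0]
        mean[i] = k_star @ np.linalg.solve(K, d)
        var[i] = (rbf_kernel(np.array([tq]), np.array([tq]))[0, 0]
                  - k_star @ np.linalg.solve(K, k_star))
    return mean, var

# Usage: densify a handful of radar returns over a full azimuth sweep.
theta_obs = np.array([-0.8, -0.3, 0.0, 0.4, 0.9])   # azimuth angles (rad)
depth_obs = np.array([12.1, 9.8, 9.5, 10.4, 14.0])  # ranges (m)
depth, uncertainty = local_gp_depth(theta_obs, depth_obs,
                                    np.linspace(-1.0, 1.0, 200))
```

Restricting each solve to a local window keeps the cubic-cost GP inversion small, which is presumably what makes the reconstruction efficient even at extreme sparsity, while the posterior variance supplies the uncertainty map the summary mentions.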

📝 Abstract
Diffusion-based approaches have recently demonstrated strong performance for single-image novel view synthesis by conditioning generative models on geometry inferred from monocular depth estimation. However, in practice, the quality and consistency of the synthesized views are fundamentally limited by the reliability of the underlying depth estimates, which are often fragile under low texture, adverse weather, and occlusion-heavy real-world conditions. In this work, we show that incorporating sparse multimodal range measurements provides a simple yet effective way to overcome these limitations. We introduce a multimodal depth reconstruction framework that leverages extremely sparse range sensing data, such as automotive radar or LiDAR, to produce dense depth maps that serve as robust geometric conditioning for diffusion-based novel view synthesis. Our approach models depth in an angular domain using a localized Gaussian Process formulation, enabling computationally efficient inference while explicitly quantifying uncertainty in regions with limited observations. The reconstructed depth and uncertainty are used as a drop-in replacement for monocular depth estimators in existing diffusion-based rendering pipelines, without modifying the generative model itself. Experiments on real-world multimodal driving scenes demonstrate that replacing vision-only depth with our sparse range-based reconstruction substantially improves both geometric consistency and visual quality in single-image novel-view video generation. These results highlight the importance of reliable geometric priors for diffusion-based view synthesis and demonstrate the practical benefits of multimodal sensing even at extreme levels of sparsity.
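
To make the "drop-in replacement" concrete, the sketch below shows one standard way a dense depth map acts as geometric conditioning: the source image is reprojected into the novel camera pose, and that partial render is what a diffusion model would be conditioned on. The pinhole model, nearest-pixel splatting, and function name are illustrative assumptions, not the paper's renderer:

```python
# Sketch: forward-warp an image into a novel view using a dense depth map.
# Assumptions: pinhole intrinsics K, known relative pose (R, t), nearest-
# pixel splatting with no z-buffering or hole filling.
import numpy as np

def forward_warp(image, depth, K, R, t):
    """Reproject `image` (H, W, 3) into a novel view given per-pixel
    `depth` (H, W), intrinsics K (3, 3), and relative pose (R, t)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3xN
    # Backproject to 3D in the source camera, move to the target camera.
    pts = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))
    pts = R @ pts + t.reshape(3, 1)
    proj = K @ pts
    uv = (proj[:2] / np.clip(proj[2], 1e-6, None)).round().astype(int)
    out = np.zeros_like(image)
    ok = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (pts[2] > 0)
    # A real pipeline would z-buffer and let the generator inpaint holes.
    out[uv[1, ok], uv[0, ok]] = image.reshape(-1, 3)[ok]
    return out
```

Under this reading, swapping a monocular estimator for the GP-reconstructed depth changes only the `depth` input to such a warp; the generative model downstream is untouched, which matches the abstract's claim that no modification to the diffusion pipeline is required.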
Problem

Research questions and friction points this paper is trying to address.

novel view synthesis
monocular depth estimation
geometric consistency
multimodal sensing
diffusion-based generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal sensing
sparse depth reconstruction
diffusion-based novel view synthesis
Gaussian Process
geometric conditioning