🤖 AI Summary
Existing novel view synthesis methods often produce images with semantic distortions and poor visual quality under large camera motions. To address this, this work proposes SemanticNVS, a camera-conditioned multi-view diffusion model that integrates pre-trained semantic feature extractors into the synthesis pipeline. The method introduces a semantic feature warping mechanism and an alternating "understand-and-generate" strategy, which jointly promote semantic consistency and photorealism at each denoising step. Evaluated across multiple benchmarks, SemanticNVS achieves substantial gains, improving FID by 4.69%–15.26% relative to the strongest baseline, and generates markedly more semantically coherent and visually realistic results, especially at distant viewpoints.
📝 Abstract
We present SemanticNVS, a camera-conditioned multi-view diffusion model for novel view synthesis (NVS) that improves generation quality and consistency by integrating pre-trained semantic feature extractors. Existing NVS methods perform well for views near the input view; however, they degrade severely under long-range camera motion, generating semantically implausible and distorted images. We speculate that this degradation arises because current models fail to fully understand their conditioning or the intermediate generated scene content. We therefore integrate pre-trained semantic feature extractors to provide stronger scene semantics as conditioning, enabling high-quality generation even at distant viewpoints. We investigate two strategies: (1) warped semantic features and (2) an alternating scheme of understanding and generation at each denoising step. Experimental results on multiple datasets demonstrate clear qualitative and quantitative improvements (4.69%–15.26% in FID) over state-of-the-art alternatives.
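The two strategies described above can be illustrated with a minimal sketch of the sampling loop. Everything here is an assumption for illustration only: the function names (`extract_semantics`, `warp_features`, `denoise_step`), the feature shapes, and the blending rule are placeholders, not the paper's actual architecture or API.

```python
# Hypothetical sketch of the "understand-and-generate" denoising loop.
# All functions below are stand-ins, NOT the paper's implementation.
import numpy as np

def extract_semantics(image):
    # Stand-in for a pre-trained semantic feature extractor;
    # here just a channel-wise mean for illustration.
    return image.mean(axis=0, keepdims=True)

def warp_features(features, camera_delta):
    # Stand-in for reprojecting semantic features from the input view
    # to the target view; a roll simulates a geometric warp.
    return np.roll(features, shift=camera_delta, axis=-1)

def denoise_step(x_t, t, semantic_cond):
    # Placeholder denoiser: nudges the sample toward the condition.
    return x_t + 0.1 * (semantic_cond - x_t)

def sample_novel_view(x_T, input_view, camera_delta, num_steps=10):
    # Strategy (1): warp the input view's semantic features once.
    warped = warp_features(extract_semantics(input_view), camera_delta)
    x_t = x_T
    for t in reversed(range(num_steps)):
        # Strategy (2): alternate understanding and generation --
        # re-extract semantics of the current intermediate sample,
        # blend with the warped features, then take a denoising step.
        current_sem = extract_semantics(x_t)
        cond = 0.5 * (warped + current_sem)
        x_t = denoise_step(x_t, t, cond)
    return x_t
```

The key design point the abstract conveys is that conditioning is refreshed inside the loop: semantics are re-estimated from the partially denoised sample at every step, rather than fixed once from the input view.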