MOVIS: Enhancing Multi-Object Novel View Synthesis for Indoor Scenes

📅 2024-12-16
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address cross-view inconsistency—such as object misalignment, shape distortion, and appearance artifacts—in novel view synthesis for indoor multi-object scenes, this paper proposes a structure-aware enhanced view-conditioned diffusion model. Methodologically, it is the first to incorporate depth maps and instance masks as structural priors into the diffusion process; introduces a mask prediction auxiliary task; and designs a structure-guided adaptive timestep sampling strategy to jointly optimize image generation and geometric consistency. Experiments on both synthetic and real-world indoor datasets demonstrate significant improvements in cross-view geometric consistency and object localization accuracy. The method achieves state-of-the-art performance across key metrics: image fidelity (FID, LPIPS) and geometric plausibility (Chamfer Distance, Mask IoU), consistently outperforming single-object view synthesis approaches.
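The summary describes injecting depth maps and instance masks as structural priors into the denoising U-Net. A common way to realize such conditioning is channel-wise concatenation of the priors with the noisy latent before the first U-Net layer. The sketch below is a hypothetical, minimal illustration of that idea (the function name and shapes are assumptions, not the paper's actual implementation):

```python
import numpy as np

def build_conditioned_input(noisy_latent, depth_map, instance_mask):
    """Hypothetical sketch: concatenate structure-aware priors (depth,
    instance mask) with the noisy latent along the channel axis, forming
    the input to a view-conditioned denoising U-Net.

    Assumed shapes: noisy_latent (C, H, W), depth_map (1, H, W),
    instance_mask (1, H, W)."""
    assert noisy_latent.shape[1:] == depth_map.shape[1:] == instance_mask.shape[1:], \
        "spatial dimensions of latent and priors must match"
    # Channel concatenation lets every U-Net layer see geometry (depth)
    # and instance layout (mask) alongside the noisy image latent.
    return np.concatenate([noisy_latent, depth_map, instance_mask], axis=0)
```

With a 4-channel latent and one channel each for depth and mask, the conditioned input has 6 channels; the U-Net's first convolution would be widened accordingly.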

📝 Abstract
Repurposing pre-trained diffusion models has been proven to be effective for NVS. However, these methods are mostly limited to a single object; directly applying such methods to compositional multi-object scenarios yields inferior results, especially incorrect object placement and inconsistent shape and appearance under novel views. How to enhance and systematically evaluate the cross-view consistency of such models remains under-explored. To address this issue, we propose MOVIS to enhance the structural awareness of the view-conditioned diffusion model for multi-object NVS in terms of model inputs, auxiliary tasks, and training strategy. First, we inject structure-aware features, including depth and object mask, into the denoising U-Net to enhance the model's comprehension of object instances and their spatial relationships. Second, we introduce an auxiliary task requiring the model to simultaneously predict novel view object masks, further improving the model's capability in differentiating and placing objects. Finally, we conduct an in-depth analysis of the diffusion sampling process and carefully devise a structure-guided timestep sampling scheduler during training, which balances the learning of global object placement and fine-grained detail recovery. To systematically evaluate the plausibility of synthesized images, we propose to assess cross-view consistency and novel view object placement alongside existing image-level NVS metrics. Extensive experiments on challenging synthetic and realistic datasets demonstrate that our method exhibits strong generalization capabilities and produces consistent novel view synthesis, highlighting its potential to guide future 3D-aware multi-object NVS tasks. Our project page is available at https://jason-aplp.github.io/MOVIS/.
Problem

Research questions and friction points this paper is trying to address.

Enhancing multi-object novel view synthesis for indoor scenes
Improving cross-view consistency in multi-object scenarios
Systematically evaluating synthesized image plausibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inject depth and object mask into U-Net
Introduce novel view object mask prediction
Structure-guided timestep sampling scheduler
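The third contribution, a structure-guided timestep sampling scheduler, balances learning of global object placement (dominant at large, noisy timesteps) against fine-detail recovery (small timesteps). One plausible way to sketch such a scheduler is to warp a uniform sample with a training-progress-dependent exponent, so the sampling distribution shifts from large to small timesteps over training. This is an illustrative assumption, not the paper's exact schedule:

```python
import numpy as np

def sample_timestep(rng, train_progress, num_steps=1000, k=5.0):
    """Hypothetical structure-guided timestep sampler.

    Early in training (train_progress near 0) it biases toward large,
    noisier timesteps, where the model must learn coarse structure such
    as object placement; late in training (train_progress near 1) it
    favors small timesteps, where fine appearance details are refined.
    k > 1 controls how sharp the bias is at either end."""
    u = rng.random()
    # Exponent sweeps from 1/k (progress=0) to k (progress=1):
    # u**(1/k) concentrates near 1 (large t); u**k near 0 (small t).
    exponent = k ** (2.0 * train_progress - 1.0)
    t = int((u ** exponent) * num_steps)
    return min(t, num_steps - 1)
```

For example, averaged over many draws, samples at `train_progress=0.0` land around timestep 830 of 1000, while at `train_progress=1.0` they land around 170, realizing the coarse-to-fine curriculum the bullet describes.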
👥 Authors

Ruijie Lu
Peking University
Computer Vision

Yixin Chen
State Key Laboratory of General Artificial Intelligence, BIGAI

Junfeng Ni
Tsinghua University
Computer Vision, 3D Reconstruction

Baoxiong Jia
Ph.D. in Computer Science, UCLA
Computer Vision, Artificial Intelligence

Yu Liu
State Key Laboratory of General Artificial Intelligence, BIGAI, Tsinghua University

Diwen Wan
AIRCAS, PKU
Computer Vision

Gang Zeng
Peking University
Computer Vision, Pattern Recognition, Computer Graphics

Siyuan Huang
State Key Laboratory of General Artificial Intelligence, BIGAI