MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors

Date: 2024-09-23
Venue: arXiv.org
Citations: 3
Influential citations: 1
AI Summary
In inverse rendering, inaccurate decoupling of albedo and material properties leads to geometric and photometric distortions when relighting scenes under novel illumination. To address this, we propose an enhanced 3D inverse rendering framework that integrates a 2D material diffusion prior. Our contributions are threefold: (1) We introduce StableMaterial, the first diffusion model explicitly trained to capture diverse material distributions in 2D; (2) We construct BlenderVault, the first large-scale synthetic material dataset, comprising ~12K geometrically and texturally diverse objects rendered with physically based shading and controllable lighting; (3) We pioneer the use of Score Distillation Sampling (SDS) for material optimization, significantly improving illumination generalization. Our method jointly leverages multi-view geometry reconstruction and Blender-based differentiable synthesis, including relighting. Evaluations across four synthetic and real-world benchmarks demonstrate consistent improvements in novel-illumination relighting quality (PSNR up by 1.8 to 3.2 dB). BlenderVault is publicly released to advance research in inverse and neural rendering.

๐Ÿ“ Abstract
Recent works in inverse rendering have shown promise in using multi-view images of an object to recover shape, albedo, and materials. However, the recovered components often fail to render accurately under new lighting conditions due to the intrinsic challenge of disentangling albedo and material properties from input images. To address this challenge, we introduce MaterialFusion, an enhanced conventional 3D inverse rendering pipeline that incorporates a 2D prior on texture and material properties. We present StableMaterial, a 2D diffusion model prior that refines multi-lit data to estimate the most likely albedo and material from given input appearances. This model is trained on albedo, material, and relit image data derived from BlenderVault, a curated dataset of approximately 12K artist-designed synthetic Blender objects. We incorporate this diffusion prior into an inverse rendering framework, using score distillation sampling (SDS) to guide the optimization of the albedo and materials, improving relighting performance over previous work. We validate MaterialFusion's relighting performance on 4 datasets of synthetic and real objects under diverse illumination conditions, showing our diffusion-aided approach significantly improves the appearance of reconstructed objects under novel lighting conditions. We intend to publicly release our BlenderVault dataset to support further research in this field.
Problem

Research questions and friction points this paper is trying to address.

Disentangling albedo and material properties from images
Improving relighting accuracy under new conditions
Enhancing inverse rendering with diffusion priors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates 2D diffusion prior for material refinement
Uses score distillation sampling for optimization
Leverages synthetic dataset for training model
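The SDS-guided optimization listed above can be sketched in a few lines: render the current albedo/material estimate, add noise at a random diffusion timestep, and use the diffusion prior's noise prediction as an update direction. Everything below (the analytic toy denoiser, the flat target map, the weighting w(t) = 1 - alpha_bar_t, the step size) is an illustrative stand-in, not the paper's actual StableMaterial model or pipeline:

```python
import numpy as np

def sds_gradient(x, denoiser, t, alpha_bar, rng):
    """Score Distillation Sampling update direction (sketch).

    Noise the current estimate, ask the diffusion prior which noise it
    "sees", and push x so the prediction matches the injected noise.
    The denoiser Jacobian is omitted, as in standard SDS.
    """
    eps = rng.standard_normal(x.shape)
    noisy = np.sqrt(alpha_bar[t]) * x + np.sqrt(1.0 - alpha_bar[t]) * eps
    eps_pred = denoiser(noisy, t)
    w = 1.0 - alpha_bar[t]  # one common weighting choice; variants exist
    return w * (eps_pred - eps)

# Toy setup: the "prior" believes the clean image is a flat 0.5 material map.
rng = np.random.default_rng(0)
alpha_bar = np.linspace(0.99, 0.01, 1000)  # toy noise schedule
target = np.full((8, 8, 3), 0.5)           # stand-in "most likely material"

def toy_denoiser(noisy, t):
    # Analytic noise prediction if the clean image were `target`.
    return (noisy - np.sqrt(alpha_bar[t]) * target) / np.sqrt(1.0 - alpha_bar[t])

x = rng.random((8, 8, 3))                  # initial albedo/material estimate
for _ in range(200):
    t = int(rng.integers(100, 900))        # random diffusion timestep
    x -= 0.1 * sds_gradient(x, toy_denoiser, t, alpha_bar, rng)
# x has now been pulled toward the prior's preferred material map.
```

In the actual method, the estimate being optimized lives in the 3D representation and the SDS gradient flows back through a differentiable renderer, rather than acting on a pixel grid directly as in this toy.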
Yehonathan Litman
Carnegie Mellon University
Or Patashnik
Tel Aviv University
Kangle Deng
Carnegie Mellon University
Aviral Agrawal
Carnegie Mellon University
Rushikesh Zawar
Carnegie Mellon University
Fernando de la Torre
Carnegie Mellon University
Shubham Tulsiani
Carnegie Mellon University
Computer Vision