Materialist: Physically Based Editing Using Single-Image Inverse Rendering

📅 2025-01-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of photorealistic rendering, in particular illumination, shadowing, and transparency control, in single-image material editing. The authors propose the first single-image inverse physically based rendering framework that does not require full scene geometry. The method employs a neural network to predict initial material properties and combines it with a progressive differentiable renderer grounded in the rendering equation, jointly optimizing environment lighting and material parameters. The framework enables high-fidelity material editing, object insertion, relighting, and refraction modeling for transparent materials. Quantitatively and qualitatively, it outperforms state-of-the-art single-view approaches, including methods based on neural radiance fields (NeRF) and Stable Diffusion, on photorealism, global-illumination consistency, shadow accuracy, and the physical interpretability of the inferred parameters.

📝 Abstract
To perform image editing based on single-view inverse physically based rendering, we present a method combining a learning-based approach with progressive differentiable rendering. Given an image, our method leverages neural networks to predict initial material properties. Progressive differentiable rendering is then used to optimize the environment map and refine the material properties with the goal of closely matching the rendered result to the input image. We require only a single image, whereas other inverse rendering methods based on the rendering equation require multiple views. In comparison to single-view methods that rely on neural renderers, our approach achieves more realistic light-material interactions, accurate shadows, and global illumination. Furthermore, with optimized material properties and illumination, our method enables a variety of tasks, including physically based material editing, object insertion, and relighting. We also propose a method for material transparency editing that operates effectively without requiring full scene geometry. Compared with methods based on Stable Diffusion, our approach offers stronger interpretability and more realistic light refraction based on empirical results.
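The optimization described in the abstract (render with the current estimates, compare against the input image, refine material and lighting) can be sketched as a toy gradient-descent loop. Everything below is a simplified illustration, not the paper's method: a single Lambertian pixel with scalar albedo and light intensity stands in for the paper's full material maps, environment map, and progressive differentiable renderer.

```python
# Toy analysis-by-synthesis loop: render, compare to the target pixel,
# and update material and lighting by gradient descent. The Lambertian
# model and scalar parameters are simplifying assumptions for
# illustration only.

def render(albedo, light, n_dot_l=0.8):
    """Shade one Lambertian pixel: reflected = albedo * light * cos(theta)."""
    return albedo * light * n_dot_l

def fit(target, steps=2000, lr=0.05, n_dot_l=0.8):
    """Recover albedo and light intensity so the render matches the target."""
    albedo, light = 0.5, 0.5  # initial guesses (the paper predicts these with a network)
    for _ in range(steps):
        err = render(albedo, light, n_dot_l) - target  # photometric residual
        # analytic gradients of the loss 0.5 * err**2 w.r.t. each parameter
        g_albedo = err * light * n_dot_l
        g_light = err * albedo * n_dot_l
        albedo -= lr * g_albedo
        light -= lr * g_light
    return albedo, light

albedo, light = fit(target=0.36)
print(render(albedo, light))  # should closely match 0.36
```

Note that albedo and light are only recovered up to their product here; this mirrors the material-lighting ambiguity that single-image inverse rendering must resolve, which is why the paper relies on a learned network for the initial estimate.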
Problem

Research questions and friction points this paper is trying to address.

Single Image Editing
Material Rendering
Transparency Adjustment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-image Material Editing
Interactive Lighting Optimization
Transparent Material Adjustment
👥 Authors
Lezhong Wang, Technical University of Denmark
D. M. Tran, Technical University of Denmark
Ruiqi Cui, Technical University of Denmark (computer graphics, geometry processing)
TG Thomson, Technical University of Denmark
M. Chandraker, University of California, San Diego
J. Frisvad, Technical University of Denmark