RealMat: Realistic Materials with Diffusion and Reinforcement Learning

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing material generation models rely heavily on synthetic data, resulting in a substantial visual gap between generated outputs and real-world materials. Although recent efforts incorporate limited real-world flash-lit images, they suffer from insufficient scale and diversity. This paper introduces reinforcement learning (RL) into material generation for the first time, leveraging a large-scale dataset of real material images captured under natural illumination to construct a perception-driven realism reward function that guides diffusion model optimization. Our method builds upon Stable Diffusion XL, employs fine-tuning with 2×2 tiled material maps, and jointly optimizes text-to-image priors and RL objectives. Experiments demonstrate significant improvements in perceptual realism, fine-grained detail fidelity, and cross-material diversity, consistently outperforming state-of-the-art baselines across multiple quantitative and qualitative metrics.
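The summary mentions fine-tuning on 2×2 tiled material maps. A minimal sketch of what such tiling could look like, assuming four per-pixel maps (albedo, normal, roughness, metallic) and an arbitrary placement within the grid; the paper only states that material maps are arranged in a 2×2 grid, so the specific map set and layout here are illustrative assumptions:

```python
import numpy as np

def tile_material_maps(albedo, normal, roughness, metallic):
    """Arrange four H x W x 3 material maps into one 2H x 2W x 3 image.

    The choice of maps and their quadrant placement are assumptions
    for illustration, not the paper's confirmed layout.
    """
    top = np.concatenate([albedo, normal], axis=1)       # top row of the grid
    bottom = np.concatenate([roughness, metallic], axis=1)  # bottom row
    return np.concatenate([top, bottom], axis=0)         # stack rows vertically
```

Packing the maps into one image lets a standard text-to-image backbone such as SDXL generate all channels jointly with no architectural changes.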

📝 Abstract
Generative models for high-quality materials are particularly desirable for making 3D content authoring more accessible. However, the majority of material generation methods are trained on synthetic data. Synthetic data provides precise supervision for material maps, which is convenient, but it also tends to create a significant visual gap with real-world materials. Alternatively, recent work used a small dataset of real flash photographs to guarantee realism; however, such data are limited in scale and diversity. To address these limitations, we propose RealMat, a diffusion-based material generator that leverages realistic priors, including a text-to-image model and a dataset of realistic material photos under natural lighting. In RealMat, we first finetune a pretrained Stable Diffusion XL (SDXL) with synthetic material maps arranged in $2\times 2$ grids. This way, our model inherits some of SDXL's realism while learning the data distribution of the synthetic material grids. Still, this creates a realism gap, with some generated materials appearing synthetic. We propose to further finetune our model through reinforcement learning (RL), encouraging the generation of realistic materials. We develop a realism reward function for any material image under natural lighting by collecting a large-scale dataset of realistic material images. We show that this approach increases generated materials' realism compared to our base model and related work.
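The abstract describes RL fine-tuning that pushes the generator toward samples scoring high under a realism reward. A toy REINFORCE-style update on a 1-D Gaussian "policy" illustrates the general mechanism; this is a sketch of reward-weighted policy-gradient optimization in miniature, not the paper's actual objective, which operates on a diffusion model:

```python
import numpy as np

def reward_weighted_update(theta, samples, rewards, lr=0.1):
    """One REINFORCE-style step on a unit-variance Gaussian policy.

    Samples with above-average reward pull the policy mean `theta`
    toward themselves; below-average samples push it away. A toy
    stand-in for RL fine-tuning of a generative model.
    """
    samples = np.asarray(samples, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    advantages = rewards - rewards.mean()  # mean baseline reduces variance
    # grad of log N(x; theta, 1) w.r.t. theta is (x - theta)
    grad = np.mean(advantages * (samples - theta))
    return theta + lr * grad
```

In RealMat the analogue of `samples` would be generated material grids and `rewards` the perception-driven realism scores, with the update applied to the diffusion model's weights rather than a single scalar.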
Problem

Research questions and friction points this paper is trying to address.

Generating realistic materials from synthetic data
Bridging the visual gap between synthetic and real materials
Enhancing material realism using diffusion and reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned Stable Diffusion XL with synthetic grids
Reinforcement learning for enhanced realism
Realism reward from large-scale natural image dataset
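The last bullet refers to a realism reward learned from a large-scale dataset of real material photos. As a hedged sketch, such a reward could be a score over features from some pretrained image encoder; the logistic linear probe below is a hypothetical stand-in, since the paper's actual reward architecture is not detailed here:

```python
import numpy as np

def realism_reward(features, w, b):
    """Hypothetical realism reward: logistic score over image features.

    `features` would come from a pretrained perception model; `w` and
    `b` would be fit to separate real material photos from synthetic
    renders. A sketch only, not the paper's confirmed reward design.
    """
    logit = float(np.dot(w, features) + b)
    return 1.0 / (1.0 + np.exp(-logit))  # in (0, 1); higher = more realistic
```

A reward bounded in (0, 1) keeps the RL objective well scaled relative to the text-to-image prior it is jointly optimized with.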