WeatherDiffusion: Weather-Guided Diffusion Model for Forward and Inverse Rendering

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of forward and inverse rendering in autonomous driving under complex weather and illumination conditions, this paper introduces the first text-guided diffusion framework tailored to this task. Methodologically: (1) we propose an intrinsic map-aware attention (MAA) mechanism that explicitly models correspondences between intrinsic images, such as albedo, shading, and surface normals, and image regions, thereby enabling high-fidelity estimation of geometry, material, and lighting properties; (2) we construct WeatherSynthetic and WeatherReal, the first large-scale synthetic and real-world autonomous driving datasets covering diverse weather conditions; (3) we integrate textual conditioning with the predicted intrinsic maps to achieve controllable, semantics-aware rendering. Extensive experiments demonstrate significant improvements over state-of-the-art methods across multiple benchmarks, along with enhanced robustness of downstream tasks, including object detection and semantic segmentation, under adverse weather. This validates the framework's effectiveness and practicality for vision-based understanding in autonomous driving systems.
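
The core of the MAA idea as summarized above is that each intrinsic map attends to the image regions that actually determine it (albedo to textured surfaces, shading to lit geometry, and so on). As a rough illustration only, the sketch below implements one plausible form of such map-aware attention in PyTorch; the module name `IntrinsicMapAwareAttention`, the learned region gate, and the per-map query tokens are assumptions made for illustration, not the authors' implementation.

```python
# A minimal sketch (not the paper's code) of intrinsic map-aware attention:
# each intrinsic map gets its own query set and a soft gate over image regions.
import torch
import torch.nn as nn

class IntrinsicMapAwareAttention(nn.Module):
    def __init__(self, dim: int, num_maps: int = 3, num_heads: int = 8):
        super().__init__()
        self.num_maps = num_maps
        # One learnable query per intrinsic map (e.g., albedo, shading, normals).
        self.map_queries = nn.Parameter(torch.randn(num_maps, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Per-map spatial gate: predicts which image regions each map attends to.
        self.region_gate = nn.Linear(dim, num_maps)

    def forward(self, image_tokens: torch.Tensor) -> torch.Tensor:
        # image_tokens: (B, N, C) flattened image features from the backbone.
        B, N, C = image_tokens.shape
        # Soft region assignment per intrinsic map: (B, N, num_maps).
        gates = self.region_gate(image_tokens).softmax(dim=-1)
        outputs = []
        for m in range(self.num_maps):
            q = self.map_queries[m].expand(B, -1, -1)      # (B, 1, C)
            kv = image_tokens * gates[..., m : m + 1]      # emphasize this map's region
            out, _ = self.attn(q, kv, kv)                  # (B, 1, C)
            outputs.append(out)
        return torch.cat(outputs, dim=1)                   # (B, num_maps, C)

# Usage sketch:
# feats = torch.randn(2, 64 * 64, 256)
# maa = IntrinsicMapAwareAttention(dim=256)
# per_map = maa(feats)  # one feature vector per intrinsic map
```

The soft gate is simply the most direct way to encode the paper's observation that different intrinsic maps correspond to different regions of the image; the actual MAA design may differ.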

📝 Abstract
Forward and inverse rendering have emerged as key techniques for enabling understanding and reconstruction in the context of autonomous driving (AD). However, complex weather and illumination pose great challenges to this task. The emergence of large diffusion models has shown promise in achieving reasonable results through learning from 2D priors, but these models are difficult to control and lack robustness. In this paper, we introduce WeatherDiffusion, a diffusion-based framework for forward and inverse rendering on AD scenes with various weather and lighting conditions. Our method enables authentic estimation of material properties, scene geometry, and lighting, and further supports controllable weather and illumination editing through the use of predicted intrinsic maps guided by text descriptions. We observe that different intrinsic maps should correspond to different regions of the original image. Based on this observation, we propose Intrinsic map-aware attention (MAA) to enable high-quality inverse rendering. Additionally, we introduce a synthetic dataset (i.e., WeatherSynthetic) and a real-world dataset (i.e., WeatherReal) for forward and inverse rendering on AD scenes with diverse weather and lighting. Extensive experiments show that our WeatherDiffusion outperforms state-of-the-art methods on several benchmarks. Moreover, our method demonstrates significant value in downstream tasks for AD, enhancing the robustness of object detection and image segmentation in challenging weather scenarios.
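
The controllable editing described in the abstract combines the predicted intrinsic maps with a text description of the target weather. Below is a minimal sketch of how such dual conditioning could enter a diffusion sampler; `denoiser`, `encode_text`, and the simplistic update rule are hypothetical placeholders under assumed shapes, not the paper's actual pipeline.

```python
# Hypothetical forward-rendering loop: intrinsic maps condition the denoiser
# via channel concatenation, the weather prompt via a text embedding.
import torch

def render_with_weather(denoiser, encode_text, intrinsics: dict, prompt: str,
                        steps: int = 50) -> torch.Tensor:
    """Sketch: predicted intrinsic maps + text prompt -> weather-edited image."""
    # Stack the predicted intrinsic maps channel-wise as spatial conditioning.
    cond = torch.cat([intrinsics["albedo"], intrinsics["normal"],
                      intrinsics["shading"]], dim=1)      # (B, C_cond, H, W)
    text_emb = encode_text(prompt)                        # e.g. "heavy rain at dusk"
    x = torch.randn_like(intrinsics["albedo"])            # start from Gaussian noise
    for t in reversed(range(steps)):
        # Every denoising step sees both the intrinsic maps (channel concat)
        # and the weather prompt (text cross-attention inside the denoiser).
        eps = denoiser(torch.cat([x, cond], dim=1), t, text_emb)
        x = x - eps / steps   # schematic update; a real sampler uses a DDPM/DDIM rule
    return x
```
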
Problem

Research questions and friction points this paper is trying to address.

Addressing challenges in forward and inverse rendering under complex weather and illumination.
Enabling controllable weather and lighting editing using text-guided intrinsic maps.
Improving robustness of object detection and segmentation in adverse weather conditions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

WeatherDiffusion: diffusion-based forward and inverse rendering
Intrinsic map-aware attention for high-quality inverse rendering
WeatherSynthetic and WeatherReal datasets for diverse conditions
Authors

Yixin Zhu · Assistant Professor, Peking University · Computer Vision, Visual Reasoning, Human-Robot Teaming
Zuoliang Zhu · Nankai University, China
Miloš Hašan · NVIDIA · Computer Graphics
Jian Yang · Nanjing University, China
Jin Xie · Nanjing University, China
Beibei Wang · Nanjing University, China