IllumiCraft: Unified Geometry and Illumination Diffusion for Controllable Video Generation

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing diffusion models struggle to jointly model illumination, appearance, and geometry for controllable video generation, resulting in poor inter-frame consistency and weak prompt alignment. This paper introduces an end-to-end diffusion framework that unifies three complementary cues for lighting, appearance, and geometry—HDR illumination maps, synthetically relit frames, and 3D point trajectories—to enable high-fidelity, temporally consistent video generation conditioned on text or a background image. The method features: (1) joint multimodal cue embedding; (2) HDR video mapping and synthetic relighting for data augmentation; and (3) 3D trajectory-guided spatiotemporal consistency modeling. Experiments demonstrate significant improvements over state-of-the-art methods in illumination realism, geometric coherence, and prompt fidelity.

📝 Abstract
Although diffusion-based models can generate high-quality and high-resolution video sequences from textual or image inputs, they lack explicit integration of geometric cues when controlling scene lighting and visual appearance across frames. To address this limitation, we propose IllumiCraft, an end-to-end diffusion framework accepting three complementary inputs: (1) high-dynamic-range (HDR) video maps for detailed lighting control; (2) synthetically relit frames with randomized illumination changes (optionally paired with a static background reference image) to provide appearance cues; and (3) 3D point tracks that capture precise 3D geometry information. By integrating the lighting, appearance, and geometry cues within a unified diffusion architecture, IllumiCraft generates temporally coherent videos aligned with user-defined prompts. It supports background-conditioned and text-conditioned video relighting and provides better fidelity than existing controllable video generation methods. Project Page: https://yuanze-lin.me/IllumiCraft_page
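The abstract describes conditioning a unified diffusion backbone on three complementary cues: HDR lighting maps, relit appearance frames, and 3D point tracks. As a minimal sketch of this "joint multimodal cue embedding" idea, each per-frame cue can be projected into a shared embedding dimension and concatenated into one conditioning vector. The projection weights, dimensions, and function names below are illustrative placeholders, not the paper's actual architecture (which learns these embeddings jointly inside the diffusion model):

```python
import random

def linear_project(x, out_dim, seed):
    # Hypothetical linear projection mapping a flat feature vector to out_dim.
    # Weights are random stand-ins; in the real model they would be learned.
    rng = random.Random(seed)
    w = [[rng.uniform(-0.1, 0.1) for _ in range(len(x))] for _ in range(out_dim)]
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def embed_cues(hdr_map, relit_frame, point_tracks, dim=8):
    # Project each cue into a shared dimension, then concatenate along the
    # channel axis to form one per-frame conditioning vector of length 3*dim.
    e_light = linear_project(hdr_map, dim, seed=0)      # lighting cue
    e_app   = linear_project(relit_frame, dim, seed=1)  # appearance cue
    e_geo   = linear_project(point_tracks, dim, seed=2) # geometry cue
    return e_light + e_app + e_geo

# Toy flattened per-frame inputs: HDR pixels, relit RGB pixels, (x, y, z) tracks
cond = embed_cues(hdr_map=[0.5] * 12, relit_frame=[0.2] * 12, point_tracks=[1.0] * 9)
print(len(cond))  # 24
```

Concatenation keeps the three modalities separable for the downstream denoiser; a real implementation would apply this per frame so the temporal dimension is preserved.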
Problem

Research questions and friction points this paper is trying to address.

Lack of geometric cues in diffusion-based video generation models
Need for unified control over lighting, appearance, and geometry in videos
Improving fidelity and temporal coherence in controllable video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified diffusion framework with HDR lighting control
Integrates 3D geometry cues via point tracks
Synthetic relighting for appearance-geometry alignment