🤖 AI Summary
Existing diffusion models struggle to jointly model illumination, appearance, and geometry for controllable video generation, resulting in poor inter-frame consistency and weak prompt alignment. This paper introduces an end-to-end diffusion framework that unifies three complementary cues spanning illumination, appearance, and geometry (HDR illumination maps, synthetically relit frames, and 3D point trajectories) to enable high-fidelity, temporally consistent video generation conditioned on text or a background image. The method features: (1) joint multimodal cue embedding; (2) HDR video mapping and synthetic relighting for data augmentation; and (3) 3D trajectory-guided spatiotemporal consistency modeling. Experiments demonstrate significant improvements over state-of-the-art methods in illumination realism, geometric coherence, and prompt fidelity for controllable video synthesis.
📝 Abstract
Although diffusion-based models can generate high-quality, high-resolution video sequences from textual or image inputs, they lack explicit integration of geometric cues when controlling scene lighting and visual appearance across frames. To address this limitation, we propose IllumiCraft, an end-to-end diffusion framework that accepts three complementary inputs: (1) high-dynamic-range (HDR) video maps for detailed lighting control; (2) synthetically relit frames with randomized illumination changes (optionally paired with a static background reference image) to provide appearance cues; and (3) 3D point tracks that capture precise 3D geometry. By integrating the lighting, appearance, and geometry cues within a unified diffusion architecture, IllumiCraft generates temporally coherent videos aligned with user-defined prompts. It supports background-conditioned and text-conditioned video relighting and achieves higher fidelity than existing controllable video generation methods. Project Page: https://yuanze-lin.me/IllumiCraft_page
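
As a rough illustration of how the three conditioning streams might be combined, the PyTorch sketch below encodes the HDR video map, the relit frames plus an optional static background, and the 3D point tracks into a shared feature space, then fuses them into per-frame features that a video diffusion backbone could consume. All module names, channel sizes, and the fusion scheme here are assumptions made for illustration; they are not the released IllumiCraft architecture.

```python
import torch
import torch.nn as nn


class IllumiCraftConditioner(nn.Module):
    """Illustrative fusion of the three cues named in the abstract:
    HDR video maps (lighting), relit frames + static background (appearance),
    and 3D point tracks (geometry). Dimensions and layers are assumptions."""

    def __init__(self, latent_dim: int = 320, track_dim: int = 3):
        super().__init__()
        # Pixel-aligned encoders applied per frame.
        self.hdr_enc = nn.Conv2d(3, latent_dim, kernel_size=3, padding=1)
        self.appearance_enc = nn.Conv2d(3 + 3, latent_dim, kernel_size=3, padding=1)
        # 3D point tracks are a sparse (N x 3) set per frame; embed with a small MLP.
        self.track_enc = nn.Sequential(
            nn.Linear(track_dim, latent_dim), nn.SiLU(), nn.Linear(latent_dim, latent_dim)
        )
        # Simple fusion: merge pixel-aligned cues, then add a global geometry feature.
        self.fuse = nn.Conv2d(latent_dim, latent_dim, kernel_size=1)

    def forward(self, hdr, relit, background, tracks):
        # hdr, relit, background: (B, T, 3, H, W); tracks: (B, T, N, 3)
        b, t, _, h, w = hdr.shape
        hdr_feat = self.hdr_enc(hdr.flatten(0, 1))
        app_feat = self.appearance_enc(
            torch.cat([relit, background], dim=2).flatten(0, 1)
        )
        geo_feat = self.track_enc(tracks.flatten(0, 1)).mean(dim=1)  # (B*T, latent_dim)
        fused = self.fuse(hdr_feat + app_feat) + geo_feat[:, :, None, None]
        # Per-frame conditioning features to be injected into the diffusion backbone.
        return fused.reshape(b, t, -1, h, w)


if __name__ == "__main__":
    cond = IllumiCraftConditioner()
    hdr = torch.randn(1, 8, 3, 64, 64)      # HDR video map
    relit = torch.randn(1, 8, 3, 64, 64)    # synthetically relit frames
    bg = torch.randn(1, 8, 3, 64, 64)       # static background, repeated over time
    tracks = torch.randn(1, 8, 1024, 3)     # 3D point tracks
    print(cond(hdr, relit, bg, tracks).shape)  # torch.Size([1, 8, 320, 64, 64])
```

In practice the fused features would be injected into the denoising network (e.g., added to intermediate activations or concatenated with the noisy video latents); the sketch only shows one plausible way to bring the three cues into a common representation.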