🤖 AI Summary
To address insufficient spatiotemporal consistency in multi-view video generation, this paper proposes a noise decomposition and collaboration framework. Methodologically, it introduces multi-level noise decomposition: initial noise is separated into scene-level foreground and background noises that capture distinct motion properties, and each is further split into a shared component that preserves cross-view consistency and a residual component that maintains diversity. It then designs a dual-matrix collaboration mechanism, pairing an inter-view spatiotemporal collaboration matrix that captures mutual cross-view effects with an intra-view impact collaboration matrix that models historical cross-frame impacts. Finally, a joint denoising stage runs two parallel denoising U-Nets, one per scene-level noise, so that the branches mutually enhance video generation during the diffusion process. Evaluated on multiple multi-view video generation benchmarks and downstream tasks, the method achieves state-of-the-art performance, significantly improving both spatiotemporal coherence and visual fidelity. Notably, the shared/residual decomposition affords a controllable trade-off between consistency and diversity in multi-view video synthesis.
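The dual-matrix collaboration described above can be pictured as weighted mixing of per-view, per-frame noise. The sketch below is an illustrative guess at the mechanism, not the paper's formulation: the function name `collaborate`, the row-stochastic mixing matrices, and the lower-triangular "history" matrix are all assumptions made here for clarity.

```python
import numpy as np

def collaborate(noise, inter_view_M, intra_view_M):
    """Illustrative noise collaboration (an assumed sketch, not the paper's method).

    noise:        (V, F, C, H, W) per-view, per-frame noise.
    inter_view_M: (V, V) row-stochastic matrix mixing noise across views
                  (mutual cross-view effects).
    intra_view_M: (F, F) row-normalized lower-triangular matrix letting each
                  frame aggregate current and earlier frames
                  (historical cross-frame impacts).
    """
    V, F = noise.shape[:2]
    flat = noise.reshape(V, F, -1)
    # Cross-view mixing: each view's noise becomes a weighted sum over all views.
    mixed = np.einsum("uv,vfd->ufd", inter_view_M, flat)
    # Cross-frame mixing: each frame draws only on itself and past frames.
    mixed = np.einsum("fg,vgd->vfd", intra_view_M, mixed)
    return mixed.reshape(noise.shape)

# Toy usage: 3 views, 4 frames, tiny latents.
rng = np.random.default_rng(0)
noise = rng.standard_normal((3, 4, 2, 2, 2))
inter = np.full((3, 3), 1.0 / 3.0)                  # uniform cross-view weights
intra = np.tril(np.ones((4, 4)))
intra /= intra.sum(axis=1, keepdims=True)            # causal, row-normalized
out = collaborate(noise, inter, intra)
```

With uniform cross-view weights, all views receive identical mixed noise, which is the extreme "maximum consistency" setting; in practice the weights would be learned or scheduled rather than fixed.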
📝 Abstract
High-quality video generation is crucial for many fields, including the film industry and autonomous driving. However, generating videos with spatiotemporal consistency remains challenging. Current methods typically rely on attention mechanisms or modify the initial noise to obtain consistent videos, neglecting global spatiotemporal information that could help ensure spatial and temporal consistency during video generation. In this paper, we propose NoiseController, consisting of Multi-Level Noise Decomposition, Multi-Frame Noise Collaboration, and Joint Denoising, to enhance spatiotemporal consistency in video generation. In multi-level noise decomposition, we first decompose initial noises into scene-level foreground/background noises, capturing distinct motion properties to model multi-view foreground/background variations. Each scene-level noise is then further decomposed into individual-level shared and residual components: the shared noise preserves consistency, while the residual component maintains diversity. In multi-frame noise collaboration, we introduce an inter-view spatiotemporal collaboration matrix and an intra-view impact collaboration matrix, which capture mutual cross-view effects and historical cross-frame impacts, respectively, to enhance video quality. The joint denoising stage contains two parallel denoising U-Nets that remove the two scene-level noises, mutually enhancing video generation. We evaluate NoiseController on public datasets covering video generation and downstream tasks, demonstrating its state-of-the-art performance.
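The multi-level decomposition above, scene-level foreground/background noises each split into shared and residual parts, can be sketched as follows. This is a minimal illustration under assumptions made here (the `shared_ratio` parameter and the variance-preserving blend are not from the paper):

```python
import numpy as np

def decompose_noise(num_views, shape, shared_ratio=0.5, rng=None):
    """Illustrative multi-level noise decomposition (assumed sketch).

    Scene level: separate foreground and background noises.
    Individual level: each view's noise = shared component (consistency)
    + per-view residual component (diversity), blended by shared_ratio.
    """
    rng = np.random.default_rng() if rng is None else rng
    noises = {}
    for level in ("foreground", "background"):
        # Shared noise: identical across all views, preserves consistency.
        shared = rng.standard_normal(shape)
        # Residual noise: sampled independently per view, maintains diversity.
        residual = rng.standard_normal((num_views, *shape))
        # Blend with sqrt weights so the result stays roughly unit-variance.
        a = shared_ratio ** 0.5
        b = (1.0 - shared_ratio) ** 0.5
        noises[level] = a * shared[None] + b * residual
    return noises

# Toy usage: 6 camera views, small latent shape (C, H, W) = (4, 8, 8).
noise = decompose_noise(num_views=6, shape=(4, 8, 8), shared_ratio=0.5)
```

Sliding `shared_ratio` toward 1 pushes all views toward identical noise (maximal consistency); toward 0 it yields fully independent noise (maximal diversity), which is one way to read the consistency/diversity trade-off the method targets.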