NoiseController: Towards Consistent Multi-view Video Generation via Noise Decomposition and Collaboration

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient spatiotemporal consistency in multi-view video generation, this paper proposes NoiseController, a noise decomposition and collaboration framework. Methodologically, it introduces multi-level noise decomposition: initial noise is separated into scene-level foreground/background components that capture distinct motion properties, and each scene-level noise is further split into an individual-level shared component (preserving cross-view consistency) and a residual component (maintaining diversity). It then designs a dual-matrix collaboration mechanism: an inter-view spatiotemporal collaboration matrix capturing mutual cross-view effects, and an intra-view impact collaboration matrix modeling historical cross-frame influences. Finally, a joint denoising stage with two parallel U-Nets removes each scene-level noise, with the branches mutually enhancing generation. Evaluated on public multi-view video generation benchmarks and downstream tasks, the method achieves state-of-the-art performance, improving both spatiotemporal coherence and visual fidelity while retaining a controllable balance between consistency and diversity.

📝 Abstract
High-quality video generation is crucial for many fields, including the film industry and autonomous driving. However, generating videos with spatiotemporal consistency remains challenging. Current methods typically utilize attention mechanisms or modify noise to achieve consistent videos, neglecting global spatiotemporal information that could help ensure spatial and temporal consistency during video generation. In this paper, we propose NoiseController, consisting of Multi-Level Noise Decomposition, Multi-Frame Noise Collaboration, and Joint Denoising, to enhance spatiotemporal consistency in video generation. In multi-level noise decomposition, we first decompose initial noises into scene-level foreground/background noises, capturing distinct motion properties to model multi-view foreground/background variations. Each scene-level noise is then further decomposed into individual-level shared and residual components: the shared noise preserves consistency, while the residual component maintains diversity. In multi-frame noise collaboration, we introduce an inter-view spatiotemporal collaboration matrix and an intra-view impact collaboration matrix, which capture mutual cross-view effects and historical cross-frame impacts to enhance video quality. The joint denoising contains two parallel denoising U-Nets that remove each scene-level noise, mutually enhancing video generation. We evaluate NoiseController on public datasets covering video generation and downstream tasks, demonstrating its state-of-the-art performance.
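The shared/residual split described in the abstract can be illustrated with a minimal variance-preserving mix of a shared Gaussian and per-view residuals. This is a hypothetical sketch, not the paper's actual implementation: the mixing weight `alpha` and the linear mixing form are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose_noise(num_views, shape, alpha=0.7):
    """Hypothetical individual-level decomposition: one shared noise
    component (drives cross-view consistency) plus a per-view residual
    (drives diversity), mixed so each view's noise stays unit-variance."""
    shared = rng.standard_normal(shape)
    views = []
    for _ in range(num_views):
        residual = rng.standard_normal(shape)
        # variance-preserving mix: alpha**2 + (1 - alpha**2) == 1
        views.append(alpha * shared + np.sqrt(1 - alpha**2) * residual)
    return np.stack(views)

noises = decompose_noise(num_views=4, shape=(8, 8))
print(noises.shape)  # (4, 8, 8)
```

Raising `alpha` toward 1 makes all views share the same noise (maximal consistency); lowering it toward 0 makes them independent (maximal diversity).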
Problem

Research questions and friction points this paper is trying to address.

Enhancing spatiotemporal consistency in multi-view video generation
Decomposing noise into scene-level and individual-level components
Improving video quality via inter-view and intra-view collaboration
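The inter-view and intra-view collaboration above could be sketched as weighted noise mixing across views and past frames. This is a hypothetical illustration, not the paper's construction: the matrices `S` and `R`, their row-normalization, and the causal lower-triangular mask are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-view, per-frame initial noises: (views, frames, height, width).
V, T, H, W = 3, 5, 4, 4
noise = rng.standard_normal((V, T, H, W))

# Inter-view collaboration matrix: couples views at the same timestep.
S = np.abs(rng.standard_normal((V, V)))
S /= S.sum(axis=1, keepdims=True)               # row-stochastic weights

# Intra-view collaboration matrix: lets earlier frames influence later
# ones; lower-triangular so the mixing stays causal in time.
R = np.tril(np.abs(rng.standard_normal((T, T))))
R /= R.sum(axis=1, keepdims=True)

mixed = np.einsum('uv,vthw->uthw', S, noise)    # cross-view mixing
mixed = np.einsum('st,vthw->vshw', R, mixed)    # historical cross-frame mixing
print(mixed.shape)  # (3, 5, 4, 4)
```

Row-normalizing both matrices keeps the mixed noise on a comparable scale to the input, and the triangular mask ensures frame `t` draws only on frames `0..t`.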
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes noise into scene-level foreground/background components
Uses shared/residual noise for consistency and diversity
Employs dual U-Nets for joint denoising enhancement
👥 Authors
Haotian Dong (Tianjin University)
Xin Wang (The Hong Kong Polytechnic University)
Di Lin (Tianjin University)
Yipeng Wu (Tianjin University)
Qin Chen (Tianjin University)
Ruonan Liu (Shanghai Jiao Tong University; Embodied AI, Vision Navigation, Fault Diagnosis)
Kairui Yang (Tianjin University)
Ping Li (The Hong Kong Polytechnic University)
Qing Guo (National University of Singapore)