DreamWorld: Unified World Modeling in Video Generation

📅 2026-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing video generation models struggle to jointly model multidimensional, heterogeneous world knowledge (such as physical commonsense, 3D structure, and temporal consistency), resulting in outputs that lack global coherence. To address this limitation, this work proposes DreamWorld, a framework built on a Joint World Modeling Paradigm that simultaneously predicts video pixels and foundation-model features during generation, yielding a coherent representation of temporal dynamics, spatial geometry, and semantic consistency. The approach introduces Consistent Constraint Annealing (CCA) during training and Multi-Source Inner-Guidance at inference to mitigate the visual instability that arises from multi-objective optimization. Experimental results show that DreamWorld outperforms Wan2.1 by 2.26 points on VBench, significantly improving the world consistency of generated videos. The code is publicly available.
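
To make the joint prediction idea concrete, here is a minimal PyTorch-style sketch of what a joint world-modeling training objective could look like: the denoiser predicts the standard diffusion target for pixels while auxiliary heads regress features from frozen foundation models. All module names, the `return_hidden` flag, and the `lambda_feat` weight are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def joint_world_modeling_loss(denoiser, feature_heads, frozen_encoders,
                              noisy_video, timestep, target_noise, clean_video,
                              lambda_feat=0.5):
    # Backbone predicts the diffusion target (noise/velocity) and exposes
    # intermediate hidden states for the auxiliary feature heads (assumed API).
    pred_noise, hidden = denoiser(noisy_video, timestep, return_hidden=True)
    pixel_loss = F.mse_loss(pred_noise, target_noise)

    # Each head regresses the features a frozen foundation model extracts
    # from the clean video (semantic, geometric, or temporal cues).
    feat_loss = 0.0
    for name, head in feature_heads.items():
        with torch.no_grad():
            target_feat = frozen_encoders[name](clean_video)
        pred_feat = head(hidden)
        # Cosine distance is a common choice for feature alignment;
        # the paper's exact alignment loss may differ.
        feat_loss = feat_loss + (1 - F.cosine_similarity(
            pred_feat.flatten(1), target_feat.flatten(1), dim=-1)).mean()

    return pixel_loss + lambda_feat * feat_loss
```

The key design point is that the feature targets come from frozen encoders, so the heterogeneous world knowledge supervises the generator without being optimized itself.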

📝 Abstract
Despite impressive progress in video generation, existing models remain limited to surface-level plausibility, lacking a coherent and unified understanding of the world. Prior approaches typically incorporate only a single form of world-related knowledge, or rely on rigid alignment strategies to introduce additional knowledge. However, aligning a single form of world knowledge is insufficient to constitute a world model, which requires jointly modeling multiple heterogeneous dimensions (e.g., physical commonsense, 3D consistency, and temporal consistency). To address this limitation, we introduce DreamWorld, a unified framework that integrates complementary world knowledge into video generators via a Joint World Modeling Paradigm, jointly predicting video pixels and features from foundation models to capture temporal dynamics, spatial geometry, and semantic consistency. Naively optimizing these heterogeneous objectives, however, can lead to visual instability and temporal flickering. To mitigate this issue, we propose Consistent Constraint Annealing (CCA) to progressively regulate world-level constraints during training, and Multi-Source Inner-Guidance to enforce learned world priors at inference. Extensive evaluations show that DreamWorld improves world consistency, outperforming Wan2.1 by 2.26 points on VBench. Code will be made publicly available at https://github.com/ABU121111/DreamWorld.
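
One plausible reading of Consistent Constraint Annealing is a schedule that relaxes the world-level constraint weight as training progresses, so late training focuses on visual fidelity rather than fighting heterogeneous objectives at full strength. The cosine schedule below is a minimal sketch under that assumption; the paper may use a different schedule or direction of annealing.

```python
import math

def cca_weight(step, total_steps, w_start=1.0, w_end=0.1):
    # Smoothly anneal the world-constraint weight from w_start to w_end.
    # Cosine decay is an assumed choice, not the paper's confirmed schedule.
    progress = min(step / total_steps, 1.0)
    return w_end + 0.5 * (w_start - w_end) * (1 + math.cos(math.pi * progress))
```

Similarly, Multi-Source Inner-Guidance could be pictured as classifier-free guidance extended to several learned world priors, each contributing a guidance direction at inference. The `prior` switch and per-source scales here are hypothetical, intended only to show the combination pattern.

```python
def multi_source_inner_guidance(denoiser, x_t, t, text_cond, scales):
    # Combine an unconditional prediction with predictions conditioned on
    # different learned world priors (assumed exposed via a `prior` argument).
    # Mirrors multi-condition classifier-free guidance; not the paper's
    # exact formulation.
    eps_uncond = denoiser(x_t, t, cond=None)
    eps = eps_uncond.clone()
    for prior_name, scale in scales.items():
        eps_prior = denoiser(x_t, t, cond=text_cond, prior=prior_name)
        eps = eps + scale * (eps_prior - eps_uncond)
    return eps
```

Per-source scales let stronger priors (e.g., geometry) steer sampling more than weaker ones, which is one way "inner" guidance from learned priors could be balanced at inference.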
Problem

Research questions and friction points this paper is trying to address.

world modeling
video generation
temporal consistency
3D consistency
physical commonsense
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint World Modeling
Consistent Constraint Annealing
Multi-Source Inner-Guidance
Video Generation
World Consistency
🔎 Similar Papers
No similar papers found.