🤖 AI Summary
This work aims to bridge the gap between stateless video generation models and classical state-centric world model theories. To this end, it proposes a dual-pillar conceptual framework centered on state construction and dynamics modeling. State construction is divided into implicit approaches (context management) and explicit approaches (latent compression), while dynamics modeling is analyzed through knowledge integration and architectural reformulation. The study advocates shifting evaluation criteria from visual fidelity toward functional benchmarks, and highlights persistence and causality as two critical frontiers. By establishing a theoretical foundation for building general-purpose world simulators endowed with physical persistence and causal reasoning, this work advances the evolution of video generation models toward functional world models.
📝 Abstract
Large-scale video generation models have demonstrated emergent physical coherence, positioning them as potential world models. However, a gap remains between contemporary "stateless" video architectures and classic state-centric world model theories. This work bridges that gap by proposing a novel taxonomy centered on two pillars: State Construction and Dynamics Modeling. We categorize state construction into implicit paradigms (context management) and explicit paradigms (latent compression), while dynamics modeling is analyzed through knowledge integration and architectural reformulation. Furthermore, we advocate for a transition in evaluation from visual fidelity to functional benchmarks that test physical persistence and causal reasoning. We conclude by identifying two critical frontiers: enhancing persistence via data-driven memory and compressed fidelity, and advancing causality through latent factor decoupling and reasoning-prior integration. By addressing these challenges, the field can evolve from generating visually plausible videos to building robust, general-purpose world simulators.