🤖 AI Summary
Video generation in complex dynamic scenes faces a trilemma among visual fidelity, physical consistency, and controllability. This work proposes Motion Forcing, a framework that explicitly decouples physical reasoning from visual synthesis through a three-level hierarchical architecture: modeling motion trajectories as sparse geometric anchors, generating scene structure as dynamic depth maps, and rendering high-fidelity appearance through texture synthesis. To reinforce the learning of physical priors such as inertia, the method introduces a masked point recovery strategy that randomly masks input anchors during training and requires the model to reconstruct the complete dynamic depth. Evaluated on autonomous driving benchmarks, Motion Forcing significantly outperforms existing approaches and balances all three desiderata even in challenging scenarios involving collisions and dense agent interactions. It also generalizes to physical simulation and robotics tasks.
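The three-level decomposition described above can be sketched as a toy pipeline. Everything here is a hypothetical stand-in: the function names, array shapes, and the constant-velocity motion model are illustrative assumptions, whereas the paper uses learned generative models at each stage.

```python
# Toy sketch of the hierarchical "Point -> Shape -> Appearance" decomposition.
# All names and shapes are illustrative, not the paper's actual architecture.
import numpy as np

def predict_point_anchors(num_agents, num_steps, rng):
    """Stage 1 (Point): model dynamics as sparse 2D trajectory anchors."""
    # A constant-velocity rollout stands in for a learned motion model.
    pos = rng.uniform(0, 64, size=(num_agents, 2))
    vel = rng.uniform(-1, 1, size=(num_agents, 2))
    return np.stack([pos + t * vel for t in range(num_steps)], axis=0)  # (T, N, 2)

def anchors_to_depth(anchors, size=64):
    """Stage 2 (Shape): expand sparse anchors into dense dynamic depth maps."""
    T = anchors.shape[0]
    depth = np.full((T, size, size), 100.0)           # background depth
    for t, frame in enumerate(anchors):
        for x, y in frame:
            xi = int(np.clip(x, 0, size - 1))
            yi = int(np.clip(y, 0, size - 1))
            depth[t, yi, xi] = 10.0                   # agents sit closer than background
    return depth

def render_appearance(depth):
    """Stage 3 (Appearance): render textures conditioned on resolved geometry."""
    # Here: trivially shade inverse depth into a normalized grayscale "video".
    inv = 1.0 / depth
    return inv / inv.max()

rng = np.random.default_rng(0)
anchors = predict_point_anchors(num_agents=3, num_steps=8, rng=rng)
depth = anchors_to_depth(anchors)
video = render_appearance(depth)
print(video.shape)  # (8, 64, 64)
```

The point of the structure, per the summary, is that each stage is separately verifiable: trajectories can be checked for physical plausibility before any pixels are rendered.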
📝 Abstract
The ultimate goal of video generation is to satisfy a fundamental trilemma: achieving high visual quality, maintaining rigorous physical consistency, and enabling precise controllability. While recent models can maintain this balance in simple, isolated scenarios, we observe that this equilibrium is fragile and often breaks down as scene complexity increases (e.g., involving collisions or dense traffic). To address this, we introduce \textbf{Motion Forcing}, a framework designed to stabilize this trilemma even in complex generative tasks. Our key insight is to explicitly decouple physical reasoning from visual synthesis via a hierarchical \textbf{``Point-Shape-Appearance''} paradigm. This approach decomposes generation into verifiable stages: modeling complex dynamics as sparse geometric anchors (\textbf{Point}), expanding them into dynamic depth maps that explicitly resolve 3D geometry (\textbf{Shape}), and finally rendering high-fidelity textures (\textbf{Appearance}). Furthermore, to foster robust physical understanding, we employ a \textbf{Masked Point Recovery} strategy. By randomly masking input anchors during training and enforcing the reconstruction of complete dynamic depth, the model is compelled to move beyond passive pattern matching and learn latent physical laws (e.g., inertia) to infer missing trajectories. Extensive experiments on autonomous driving benchmarks show that Motion Forcing significantly outperforms state-of-the-art baselines, maintaining trilemma stability across complex scenes. Evaluations on physics simulation and robotics tasks further confirm our framework's generality.
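The Masked Point Recovery idea can be illustrated with a minimal sketch: anchors are randomly dropped during training, and the missing steps must be inferred from a physical prior rather than copied from the input. The masking rule and the constant-velocity (inertia) recovery below are illustrative assumptions, not the paper's learned mechanism, which reconstructs full dynamic depth with a generative model.

```python
# Minimal sketch of Masked Point Recovery: randomly mask anchor timesteps,
# then recover them from an inertial (constant-velocity) prior. The masking
# ratio and recovery rule are illustrative, not the paper's actual method.
import numpy as np

def mask_anchors(trajectory, mask_ratio, rng):
    """Randomly mask timesteps of an anchor trajectory of shape (T, 2)."""
    T = trajectory.shape[0]
    keep = rng.random(T) >= mask_ratio
    keep[[0, 1]] = True  # keep the first two steps so velocity is observable
    return np.where(keep[:, None], trajectory, np.nan), keep

def recover_with_inertia(masked, keep):
    """Fill masked steps by extrapolating from the two preceding steps."""
    out = masked.copy()
    for t in range(len(out)):
        if not keep[t]:
            out[t] = 2 * out[t - 1] - out[t - 2]  # constant-velocity prior
    return out

rng = np.random.default_rng(1)
# A purely inertial ground-truth trajectory: position grows linearly in time.
traj = np.stack([t * np.array([1.0, 0.5]) for t in range(10)])
masked, keep = mask_anchors(traj, mask_ratio=0.5, rng=rng)
recovered = recover_with_inertia(masked, keep)
print(np.allclose(recovered, traj))  # True for purely inertial motion
```

The training signal works the same way in spirit: because the masked steps are recoverable only through a physical law, minimizing reconstruction error forces the model to internalize that law instead of pattern-matching visible anchors.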