🤖 AI Summary
Existing interactive generative systems for game worlds lack precise action control and long-term 3D consistency, and fail to effectively model the coupling between actions and 3D geometry. This work proposes 6-degree-of-freedom camera poses as a unified geometric representation, modeling user inputs in the Lie algebra to generate accurate camera motions. A global-pose-based spatial memory mechanism is introduced to enable efficient retrieval of historical observations and their alignment with actions. By integrating a video diffusion Transformer, pose embeddings, and a large-scale game dataset annotated with trajectories, the method significantly outperforms state-of-the-art models in action controllability, visual fidelity, and long-term 3D consistency.
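To make the spatial memory mechanism concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of a pose-indexed memory: past observations are stored keyed by their global camera position, and the nearest stored frames are retrieved for the current pose. The class and method names are illustrative assumptions; a real system would also compare rotations and retrieve over learned features.

```python
import numpy as np

class PoseMemory:
    """Toy global-pose-indexed spatial memory (illustrative only)."""

    def __init__(self):
        self.positions = []  # global camera positions, each shape (3,)
        self.frames = []     # associated observations (any payload)

    def add(self, position, frame):
        self.positions.append(np.asarray(position, dtype=float))
        self.frames.append(frame)

    def retrieve(self, query_position, k=2):
        # Return the k stored frames whose camera positions are
        # closest (Euclidean distance) to the query position.
        dists = [np.linalg.norm(p - query_position) for p in self.positions]
        order = np.argsort(dists)[:k]
        return [self.frames[i] for i in order]

memory = PoseMemory()
memory.add([0.0, 0.0, 0.0], "frame_origin")
memory.add([5.0, 0.0, 0.0], "frame_far")
memory.add([0.5, 0.0, 0.0], "frame_near")
nearest = memory.retrieve(np.array([0.0, 0.0, 0.0]), k=2)
print(nearest)  # the two spatially closest observations
```

When the user navigates back to a previously visited location, such a lookup lets the generator condition on what was observed there before, which is the geometric basis for consistent revisiting.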
📝 Abstract
Recent advances in video diffusion transformers have enabled interactive gaming world models that allow users to explore generated environments over extended horizons. However, existing approaches struggle with precise action control and long-horizon 3D consistency. Most prior works treat user actions as abstract conditioning signals, overlooking the fundamental geometric coupling between actions and the 3D world, whereby actions induce relative camera motions that accumulate into a global camera pose within a 3D world. In this paper, we establish camera pose as a unifying geometric representation to jointly ground immediate action control and long-term 3D consistency. First, we define a physics-based continuous action space and represent user inputs in the Lie algebra to derive precise 6-DoF camera poses, which are injected into the generative model via a camera embedder to ensure accurate action alignment. Second, we use global camera poses as spatial indices to retrieve relevant past observations, enabling geometrically consistent revisiting of locations during long-horizon navigation. To support this research, we introduce a large-scale dataset comprising 3,000 minutes of authentic human gameplay annotated with camera trajectories and textual descriptions. Extensive experiments show that our approach substantially outperforms state-of-the-art interactive gaming world models in action controllability, long-horizon visual quality, and 3D spatial consistency.
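The action-to-pose pipeline described above can be sketched in code. The following is an illustrative example (not the paper's code): a continuous action is expressed as a twist in the Lie algebra se(3), mapped to a rigid transform in SE(3) via the exponential map, and per-step relative motions are composed into a global camera pose. The function names and the two-step trajectory are assumptions for demonstration.

```python
import numpy as np

def hat(w):
    """Skew-symmetric (hat) matrix of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(v, w):
    """Exponential map se(3) -> SE(3).
    v: translational component (3,), w: rotational component (3,)."""
    theta = np.linalg.norm(w)
    W = hat(w)
    if theta < 1e-8:                       # near-pure translation
        R, V = np.eye(3), np.eye(3)
    else:
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1 - np.cos(theta)) / theta**2 * W @ W)      # Rodrigues
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * W @ W)  # left Jacobian
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

# Accumulate per-step relative motions into a global 6-DoF pose:
pose = np.eye(4)
for step_v, step_w in [([0, 0, 1], [0, 0, 0]),            # move forward 1 unit
                       ([0, 0, 0], [0, np.pi / 2, 0])]:   # yaw 90° about y
    pose = pose @ se3_exp(np.array(step_v, float), np.array(step_w, float))
print(np.round(pose, 3))
```

This is the geometric coupling the abstract refers to: each user action induces a relative motion, and composing those motions yields the global camera pose that both conditions generation and indexes the spatial memory.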