WildWorld: A Large-Scale Dataset for Dynamic World Modeling with Actions and Explicit State toward Generative ARPG

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video world model datasets suffer from impoverished action semantics and a strong coupling between actions and pixel-level changes, making it difficult to model structured dynamics that remain consistent over long horizons. To address this, this work introduces WildWorld, a large-scale action-conditioned world modeling dataset automatically collected from a realistic AAA-grade ARPG game, comprising over 100 million frames covering more than 450 semantically rich actions. WildWorld provides synchronized explicit state annotations—including character skeletons, world states, camera poses, and depth maps—enabling, for the first time, the decoupling of high-level semantic actions from multimodal states in a realistic gaming environment. This facilitates state-aware generative world modeling and is accompanied by the WildBench evaluation benchmark. Experiments reveal that current models still face significant challenges in semantic action modeling and long-term state consistency, underscoring the critical role of explicit state representations.

📝 Abstract
Dynamical systems theory and reinforcement learning view world evolution as latent-state dynamics driven by actions, with visual observations providing only partial information about the state. Recent video world models attempt to learn these action-conditioned dynamics from data. However, existing datasets rarely meet this requirement: they typically lack diverse and semantically meaningful action spaces, and actions are tied directly to visual observations rather than mediated by underlying states. As a result, actions are often entangled with pixel-level changes, making it difficult for models to learn structured world dynamics and maintain consistent evolution over long horizons. In this paper, we propose WildWorld, a large-scale action-conditioned world modeling dataset with explicit state annotations, automatically collected from a photorealistic AAA action role-playing game (Monster Hunter: Wilds). WildWorld contains over 108 million frames and features more than 450 actions, including movement, attacks, and skill casting, together with synchronized per-frame annotations of character skeletons, world states, camera poses, and depth maps. We further derive WildBench to evaluate models through Action Following and State Alignment. Extensive experiments reveal persistent challenges in modeling semantically rich actions and maintaining long-horizon state consistency, highlighting the need for state-aware video generation. The project page is https://shandaai.github.io/wildworld-project/.
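The latent-state view in the abstract — actions drive a hidden state, and each frame observes that state only partially — can be illustrated with a minimal toy sketch. All names below (`State`, `transition`, `observe`, the action strings) are hypothetical illustrations of the formalism, not the WildWorld data format or API:

```python
from dataclasses import dataclass

# Toy illustration of action-conditioned latent-state dynamics:
#   s_{t+1} = f(s_t, a_t)   (the world evolves under actions)
#   o_t     = g(s_t)        (a rendered frame reveals only part of the state)

@dataclass
class State:
    position: tuple   # world-space character position (visible in a frame)
    health: float     # example of state NOT visible in a single frame

def transition(s: State, action: str) -> State:
    """Toy dynamics f: actions change the latent state, not pixels directly."""
    x, y = s.position
    if action == "move_forward":
        return State((x, y + 1), s.health)
    if action == "attack":
        return State((x, y), s.health)  # position unchanged by this action
    return s

def observe(s: State) -> dict:
    """Partial observation g: exposes position but omits health."""
    return {"position": s.position}

# Rolling out an action sequence produces a (state, observation) trajectory;
# a pixel-only dataset would record only the observations.
s = State((0, 0), 100.0)
for a in ["move_forward", "move_forward", "attack"]:
    s = transition(s, a)
print(observe(s))
```

Under this framing, WildWorld's explicit per-frame annotations (skeletons, world states, camera poses, depth) play the role of exposing more of `s` than the raw frames alone, which is what lets semantic actions be decoupled from pixel-level change.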
Problem

Research questions and friction points this paper is trying to address.

world modeling
action-conditioned dynamics
explicit state
long-horizon consistency
semantic actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

world modeling
explicit state
action-conditioned generation
large-scale dataset
generative ARPG