🤖 AI Summary
This work addresses the absence of real-time, open-source interactive world models for open-ended sandbox games (e.g., Minecraft). We propose the first vision–action joint modeling framework enabling low-latency human–AI co-creation. Methodologically, we introduce a novel vision–action interleaved discretization paradigm; design a parallel decoding algorithm that exploits spatial redundancy within frames to achieve real-time generation at 4–7 FPS; and employ a vision–action autoregressive Transformer with dual tokenizers to jointly model images and actions as a unified discrete token sequence. Our contributions include: (1) the first open-source, real-time Minecraft world model; (2) new evaluation metrics that jointly quantify visual fidelity and action consistency; and (3) significant improvements over existing open-source diffusion-based models in generation quality, inference speed, and action responsiveness. All code and models are publicly released.
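The interleaved discretization paradigm can be sketched as follows. This is an illustrative assumption about the sequence layout, not the released MineWorld code: the token counts, id values, and the `interleave` helper are hypothetical, and the real model uses far more image tokens per frame.

```python
# Hypothetical sketch of a vision-action interleaved token sequence:
# each game step contributes its image-tokenizer ids followed by its
# action-tokenizer ids, forming one flat sequence for next-token
# prediction. All names and id values here are illustrative.

def interleave(frames, actions):
    """Build one training sequence: [img_0..., act_0, img_1..., act_1, ...]."""
    assert len(frames) == len(actions)
    seq = []
    for img_ids, act_ids in zip(frames, actions):
        seq.extend(img_ids)   # discrete ids from the image tokenizer
        seq.extend(act_ids)   # discrete ids from the action tokenizer
    return seq

frames = [[1, 2, 3, 4], [5, 6, 7, 8]]   # toy: 4 image tokens per frame
actions = [[100], [101]]                # toy: 1 action token per step
print(interleave(frames, actions))
# -> [1, 2, 3, 4, 100, 5, 6, 7, 8, 101]
```

Training with a standard next-token objective on such sequences makes the model predict both the next scene tokens (conditioned on the action) and, implicitly, representations of how actions transform states.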
📝 Abstract
World modeling is a crucial task for enabling intelligent agents to interact effectively with humans and operate in dynamic environments. In this work, we propose MineWorld, a real-time interactive world model for Minecraft, an open-ended sandbox game that has served as a common testbed for world modeling. MineWorld is driven by a visual-action autoregressive Transformer, which takes paired game scenes and corresponding actions as input and generates the consequent new scenes that follow the actions. Specifically, we transform visual game scenes and actions into discrete token ids with an image tokenizer and an action tokenizer respectively, and construct the model input by interleaving and concatenating the two kinds of ids. The model is then trained with next-token prediction to simultaneously learn rich representations of game states and the dependencies between states and actions. For inference, we develop a novel parallel decoding algorithm that predicts the spatially redundant tokens within each frame at the same time, letting models of different scales generate $4$ to $7$ frames per second and enabling real-time interaction with game players. For evaluation, we propose new metrics that assess not only visual quality but also the capacity to follow actions when generating new scenes, which is crucial for a world model. Our comprehensive evaluation shows the efficacy of MineWorld, which significantly outperforms state-of-the-art open-source diffusion-based world models. The code and model have been released.
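One way such a parallel decoding schedule can work is to group a frame's token grid by anti-diagonals: once a position's left and top neighbors are decoded, it can be predicted, so all positions on the same anti-diagonal are predicted in one step. This grouping rule and the `diagonal_schedule` helper are a hedged sketch of the idea, not the exact algorithm in the paper.

```python
# Hedged sketch of a diagonal parallel-decoding schedule for one frame's
# spatial token grid. Assumption (illustrative): tokens on the same
# anti-diagonal (row + col constant) are decoded together, reducing the
# number of sequential steps from h * w to h + w - 1.

def diagonal_schedule(h, w):
    """Group (row, col) token positions by their anti-diagonal index."""
    steps = [[] for _ in range(h + w - 1)]
    for r in range(h):
        for c in range(w):
            steps[r + c].append((r, c))
    return steps

# A toy 3x4 grid: 6 parallel steps instead of 12 sequential predictions.
for step, positions in enumerate(diagonal_schedule(3, 4)):
    print(step, positions)
```

For a realistic grid (e.g., hundreds of tokens per frame), this kind of schedule is what turns per-token autoregressive decoding into the 4-7 FPS frame rates quoted above.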