MineWorld: a Real-Time and Open-Source Interactive World Model on Minecraft

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the absence of real-time, open-source interactive world models for open-ended sandbox games such as Minecraft. We propose the first vision–action joint modeling framework enabling low-latency human–AI co-creation. Methodologically, we introduce a novel vision–action interleaved discretization paradigm; design a parallel decoding algorithm that exploits spatial redundancy within frames to achieve real-time generation at 4–7 FPS; and employ a vision–action autoregressive Transformer with dual tokenizers to jointly model images and actions as a unified discrete token sequence. Our contributions include: (1) the first open-source, real-time Minecraft world model; (2) new evaluation metrics that jointly quantify visual fidelity and action consistency; and (3) significant improvements over existing open-source diffusion-based models in generation quality, inference speed, and action responsiveness. All code and models are publicly released.

📝 Abstract
World modeling is a crucial task for enabling intelligent agents to interact effectively with humans and operate in dynamic environments. In this work, we propose MineWorld, a real-time interactive world model on Minecraft, an open-ended sandbox game that has been widely used as a testbed for world modeling. MineWorld is driven by a visual-action autoregressive Transformer, which takes paired game scenes and corresponding actions as input and generates the subsequent scenes that follow the actions. Specifically, by transforming visual game scenes and actions into discrete token ids with an image tokenizer and an action tokenizer respectively, we construct the model input as an interleaved concatenation of the two kinds of ids. The model is then trained with next-token prediction to learn rich representations of game states as well as the dependencies between states and actions simultaneously. At inference time, we develop a novel parallel decoding algorithm that predicts the spatially redundant tokens in each frame at the same time, letting models at different scales generate 4 to 7 frames per second and enabling real-time interaction with game players. For evaluation, we propose new metrics that assess not only visual quality but also the action-following capacity when generating new scenes, which is crucial for a world model. Our comprehensive evaluation shows the efficacy of MineWorld, which significantly outperforms state-of-the-art open-source diffusion-based world models. The code and model have been released.
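The interleaved input described in the abstract can be sketched as follows. This is a minimal illustration, not the released implementation: the function names, token layout `[frame_0, action_0, frame_1, action_1, ...]`, and the assumption of one action step between consecutive frames are all hypothetical simplifications of the paper's design.

```python
# Hypothetical sketch of an interleaved vision-action token sequence.
# Image and action token ids are assumed to come from separate tokenizers
# with disjoint id ranges, as the abstract describes.

def build_interleaved_sequence(frame_tokens, action_tokens):
    """Interleave per-frame image token ids with the action token ids
    taken between frames: [f0, a0, f1, a1, ..., fN].

    frame_tokens:  list of per-frame token-id lists (image tokenizer output)
    action_tokens: list of per-step token-id lists (action tokenizer output),
                   one fewer entry than frame_tokens
    """
    assert len(action_tokens) == len(frame_tokens) - 1
    sequence = []
    for i, frame in enumerate(frame_tokens):
        sequence.extend(frame)
        if i < len(action_tokens):
            sequence.extend(action_tokens[i])
    return sequence

def shift_targets(sequence):
    """Next-token-prediction pairs: inputs are the sequence, targets are
    the same sequence shifted left by one position."""
    return sequence[:-1], sequence[1:]
```

Training on the shifted pairs lets a single autoregressive Transformer learn both frame content and how frames change under actions, since every action token is predicted from the preceding scene and every scene token from the preceding scene-plus-action context.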
Problem

Research questions and friction points this paper is trying to address.

Develop a real-time interactive world model for Minecraft
Learn game states and state–action dependencies via next-token prediction
Evaluate visual quality and action-following capacity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual-action autoregressive Transformer model
Parallel decoding for real-time frame generation
Discrete tokenization of scenes and actions
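One common way to realize parallel decoding over a 2-D token grid is to schedule positions in anti-diagonal waves, so that tokens whose left and upper neighbors are already decoded can be predicted in a single forward pass. The sketch below shows only this scheduling idea; the paper's actual algorithm for exploiting spatial redundancy may group tokens differently, so treat `diagonal_waves` as an illustrative assumption.

```python
# Hedged sketch: group positions of an H x W token grid into anti-diagonal
# waves. All positions in one wave can be decoded in parallel if each token
# depends only on tokens above and to its left.

def diagonal_waves(height, width):
    """Return a list of waves; wave k holds every (row, col) with row+col == k."""
    waves = [[] for _ in range(height + width - 1)]
    for r in range(height):
        for c in range(width):
            waves[r + c].append((r, c))
    return waves
```

For a 3x4 grid this yields 6 waves instead of 12 strictly sequential steps, which is the kind of reduction that makes the reported 4–7 FPS plausible for an autoregressive model.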
Junliang Guo
Microsoft Research
Deep Learning · Generative Models · Natural Language Processing
Yang Ye
Microsoft Research
Tianyu He
Microsoft Research
machine learning · generative models · world models
Haoyu Wu
Microsoft Research
Yushu Jiang
Microsoft Research
Tim Pearce
Microsoft Research
Jiang Bian
Microsoft Research