Improving Transformer World Models for Data-Efficient RL

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address data scarcity, weak long-horizon reasoning, and limited exploration and generalization in the open-world 2D survival game Craftax-classic, this paper proposes a sample-efficient Transformer World Model (TWM) reinforcement learning framework. It introduces three key innovations: (1) "Dyna with warmup", which stabilizes early learning by training the policy on real data first and then on both real and imagined trajectories; (2) a nearest-neighbor tokenizer on image patches, which yields efficient, semantically grounded visual tokens for the TWM; and (3) block teacher forcing, which lets the TWM reason jointly about all tokens of the next timestep, improving temporal consistency. The policy network combines a CNN and an RNN. Evaluated on Craftax-classic with only 1 million environment steps, the agent achieves a 67.4% task reward, surpassing DreamerV3 (53.2%) and, for the first time, exceeding human performance (65.0%), thereby establishing a new state of the art.
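The nearest-neighbor tokenizer described above maps each image patch to the index of its closest codebook entry, rather than learning a neural encoder. A minimal sketch of that idea (not the paper's implementation; `nn_tokenize`, the patch size, and the flat codebook layout are illustrative assumptions):

```python
import numpy as np

def nn_tokenize(frame, codebook, patch=8):
    """Split a frame into non-overlapping patches and assign each patch
    the index of its nearest codebook entry (squared L2 distance)."""
    H, W, C = frame.shape
    tokens = []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            p = frame[i:i + patch, j:j + patch].reshape(-1)
            d = np.sum((codebook - p) ** 2, axis=1)  # distance to every code
            tokens.append(int(np.argmin(d)))
    return np.array(tokens)
```

Each frame thus becomes a short sequence of discrete token indices that can be fed to the transformer world model.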

📝 Abstract
We present an approach to model-based RL that achieves new state-of-the-art performance on the challenging Craftax-classic benchmark, an open-world 2D survival game that requires agents to exhibit a wide range of general abilities -- such as strong generalization, deep exploration, and long-term reasoning. With a series of careful design choices aimed at improving sample efficiency, our MBRL algorithm achieves a reward of 67.4% after only 1M environment steps, significantly outperforming DreamerV3, which achieves 53.2%, and, for the first time, exceeds human performance of 65.0%. Our method starts by constructing a SOTA model-free baseline, using a novel policy architecture that combines CNNs and RNNs. We then add three improvements to the standard MBRL setup: (a) "Dyna with warmup", which trains the policy on real and imaginary data, (b) "nearest neighbor tokenizer" on image patches, which improves the scheme to create the transformer world model (TWM) inputs, and (c) "block teacher forcing", which allows the TWM to reason jointly about the future tokens of the next timestep.
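Block teacher forcing replaces the strict token-by-token causal mask with a block-causal one: every token may attend to all tokens of its own timestep and of earlier timesteps, so the TWM can predict the next timestep's tokens jointly rather than one at a time. A hedged sketch of such a mask (`block_causal_mask` is a hypothetical helper, not code from the paper):

```python
import numpy as np

def block_causal_mask(num_steps, tokens_per_step):
    """Boolean attention mask (True = attention allowed) where each token
    attends to every token in its own timestep block and all earlier blocks,
    instead of only to strictly preceding positions."""
    n = num_steps * tokens_per_step
    block = np.arange(n) // tokens_per_step  # timestep index of each token
    return block[:, None] >= block[None, :]
```

For example, with 2 tokens per timestep, the two tokens of step 0 attend to each other but not to step 1, while both tokens of step 1 attend to everything.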
Problem

Research questions and friction points this paper is trying to address.

Limited Data
Transformer World Models
Robot Learning Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dyna with Warmup
Nearest-Neighbor Tokenizer
Block Teacher Forcing
Antoine Dedieu
Senior Research Scientist at Google DeepMind
Reinforcement Learning, Representation Learning, Bayesian Modeling, Statistical Learning
Joseph Ortiz
Research Scientist, Google DeepMind
Machine Learning, Computer Vision, Robotics
Xinghua Lou
Google DeepMind
Carter Wendelken
Google DeepMind
Wolfgang Lehrach
Google DeepMind
J Swaroop Guntupalli
Google DeepMind
Miguel Lazaro-Gredilla
Google DeepMind
Kevin Patrick Murphy
Google DeepMind