Masked Generative Priors Improve World Models Sequence Modelling Capabilities

📅 2024-10-10
🏛️ arXiv.org
🤖 AI Summary
To address the weak sequential modeling capability, low data efficiency, and poor adaptability to continuous-control tasks in existing world models, this paper proposes GIT-STORM. Methodologically, it is the first to integrate Masked Generative Image Transformer (MaskGIT) priors into the STORM world model, yielding an architecture with enhanced temporal modeling and efficient latent-state updates. It introduces a novel state-mixing function that explicitly fuses action inputs with latent states. Furthermore, it achieves the first end-to-end reinforcement learning with a Transformer-based world model in continuous-action environments—specifically, the DeepMind Control Suite. Empirically, GIT-STORM significantly improves sample efficiency on Atari 100k, demonstrates superior policy learning in continuous-control benchmarks, and concurrently enhances video prediction fidelity and cross-task generalization.

📝 Abstract
Deep Reinforcement Learning (RL) has become the leading approach for creating artificial agents in complex environments. Model-based approaches, which are RL methods with world models that predict environment dynamics, are among the most promising directions for improving data efficiency, forming a critical step toward bridging the gap between research and real-world deployment. In particular, world models enhance sample efficiency by learning in imagination, which involves training a generative sequence model of the environment in a self-supervised manner. Recently, Masked Generative Modelling has emerged as a more efficient and superior inductive bias for modelling and generating token sequences. Building on the Efficient Stochastic Transformer-based World Models (STORM) architecture, we replace the traditional MLP prior with a Masked Generative Prior (e.g., MaskGIT Prior) and introduce GIT-STORM. We evaluate our model on two downstream tasks: reinforcement learning and video prediction. GIT-STORM demonstrates substantial performance gains in RL tasks on the Atari 100k benchmark. Moreover, we apply Transformer-based World Models to continuous action environments for the first time, addressing a significant gap in prior research. To achieve this, we employ a state mixer function that integrates latent state representations with actions, enabling our model to handle continuous control tasks. We validate this approach through qualitative and quantitative analyses on the DeepMind Control Suite, showcasing the effectiveness of Transformer-based World Models in this new domain. Our results highlight the versatility and efficacy of the MaskGIT dynamics prior, paving the way for more accurate world models and effective RL policies.
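The abstract's core idea is swapping STORM's MLP prior for a MaskGIT-style masked generative prior, which generates a token sequence by iteratively committing the most confident predictions and re-masking the rest. A minimal NumPy sketch of that iterative decoding loop (the `MASK` sentinel, the cosine unmasking schedule, and `logits_fn` are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

MASK = -1  # illustrative sentinel for a masked token position

def maskgit_decode(logits_fn, seq_len, steps=4):
    """MaskGIT-style iterative decoding sketch: start fully masked, and at
    each step commit the most confident token predictions, re-masking the
    least confident ones according to a cosine schedule."""
    tokens = np.full(seq_len, MASK, dtype=int)
    for t in range(steps):
        # number of positions that should remain masked after this step
        keep_masked = int(np.floor(seq_len * np.cos(np.pi / 2 * (t + 1) / steps)))
        logits = logits_fn(tokens)                      # (seq_len, vocab_size)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        pred = probs.argmax(-1)                         # greedy token per position
        conf = probs.max(-1)
        conf[tokens != MASK] = np.inf                   # committed tokens never re-masked
        new_tokens = pred.copy()
        new_tokens[np.argsort(conf)[:keep_masked]] = MASK  # re-mask least confident
        new_tokens[tokens != MASK] = tokens[tokens != MASK]
        tokens = new_tokens
    return tokens                                       # fully unmasked after last step
```

At the final step the cosine schedule reaches zero, so every position is committed; in the world-model setting the decoded tokens would form the predicted next latent state.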
Problem

Research questions and friction points this paper is trying to address.

Enhancing RL world models with Masked Generative Priors
Improving sample efficiency in continuous action environments
Applying Transformer-based models to video prediction tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked Generative Prior replaces MLP prior
State mixer integrates latent states with actions
Transformer-based World Models for continuous control
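The state mixer in the list above is described only as integrating latent states with actions so the transformer prior can condition on continuous controls. A minimal sketch of one plausible form, concatenation followed by a learned projection (the function name, shapes, and `tanh` nonlinearity are assumptions for illustration):

```python
import numpy as np

def state_mixer(latent, action, W, b):
    """Hypothetical state-mixing function: concatenate the latent state with a
    continuous action vector and apply a linear projection, producing a single
    mixed token for the transformer dynamics model to consume."""
    x = np.concatenate([latent, action], axis=-1)  # (latent_dim + action_dim,)
    return np.tanh(W @ x + b)                      # mixed embedding, bounded in (-1, 1)
```

Because the action enters as a raw vector rather than a discrete index, the same interface covers DeepMind Control Suite's continuous action spaces, which is the gap the paper reports closing.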