Puzzle it Out: Local-to-Global World Model for Offline Multi-Agent Reinforcement Learning

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Offline multi-agent reinforcement learning often yields overly conservative policies with limited generalization because training is confined to the dataset's restricted distribution. This work proposes the Local-to-Global (LOGO) world model, which infers global state transitions by modeling easier-to-estimate local dynamics and generates synthetic data to augment the original dataset. Coupled with a lightweight uncertainty-aware sampling mechanism, LOGO implicitly captures inter-agent dependencies while avoiding the high computational overhead of conventional ensemble methods: only a single additional encoder is needed to estimate uncertainty. The approach outperforms eight strong baselines across eight standard offline MARL benchmark scenarios, establishing a strong model-based baseline for generalizable offline multi-agent learning.

📝 Abstract
Offline multi-agent reinforcement learning (MARL) aims to solve cooperative decision-making problems in multi-agent systems using pre-collected datasets. Existing offline MARL methods primarily constrain training within the dataset distribution, resulting in overly conservative policies that struggle to generalize beyond the support of the data. While model-based approaches offer a promising solution by expanding the original dataset with synthetic data generated from a learned world model, the high dimensionality, non-stationarity, and complexity of multi-agent systems make it challenging to accurately estimate the transition and reward functions in offline MARL. Given the difficulty of directly modeling joint dynamics, we propose a local-to-global (LOGO) world model, a novel framework that leverages local predictions, which are easier to estimate, to infer global state dynamics, thus improving prediction accuracy while implicitly capturing agent-wise dependencies. Using the trained world model, we generate synthetic data to augment the original dataset, expanding the effective state-action space. To ensure reliable policy learning, we further introduce an uncertainty-aware sampling mechanism that adaptively weights synthetic data by prediction uncertainty, reducing the propagation of approximation error into the policy. In contrast to conventional ensemble-based methods, our approach requires only an additional encoder for uncertainty estimation, significantly reducing computational overhead while maintaining accuracy. Extensive experiments across 8 scenarios against 8 baselines demonstrate that our method surpasses the state of the art on standard offline MARL benchmarks, establishing a new model-based baseline for generalizable offline multi-agent learning.
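The uncertainty-aware sampling idea described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes each synthetic transition comes with a scalar uncertainty score (e.g. from the additional encoder the paper mentions), and converts those scores into sampling weights via a softmax over negative uncertainty, so that low-uncertainty transitions are drawn more often. The function name, the temperature parameter, and the example uncertainty values are all hypothetical.

```python
import math
import random

def uncertainty_weights(uncertainties, temperature=1.0):
    """Map per-transition uncertainty scores to sampling probabilities.

    Lower predicted uncertainty -> higher weight, via a softmax over
    -u / temperature. A smaller temperature sharpens the preference
    for confident (low-uncertainty) synthetic transitions.
    """
    logits = [-u / temperature for u in uncertainties]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical uncertainty scores for three synthetic transitions.
u = [0.1, 0.5, 2.0]
w = uncertainty_weights(u)

# Sample synthetic transitions for the replay buffer in proportion
# to their weights; the confident transition (index 0) dominates.
rng = random.Random(0)
sampled_indices = rng.choices(range(len(u)), weights=w, k=5)
```

In this reading, the mechanism is a soft filter on model-generated data: rather than discarding high-uncertainty rollouts outright, they are simply sampled less often, which keeps the augmented dataset broad while limiting how much world-model error reaches the policy.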
Problem

Research questions and friction points this paper is trying to address.

offline multi-agent reinforcement learning
world model
generalization
data distribution
transition dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

local-to-global world model
offline multi-agent reinforcement learning
synthetic data augmentation
uncertainty-aware sampling
model-based MARL