Offline vs. Online Learning in Model-based RL: Lessons for Data Collection Strategies

📅 2025-09-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how online versus offline data collection strategies affect the generalization and task performance of world models in model-based reinforcement learning. We identify that offline training suffers from insufficient state coverage, leading to out-of-distribution states at test time and substantial performance degradation. To mitigate this, we propose two mechanisms: (1) incorporating limited online interaction, under a fixed or adaptive schedule, to recalibrate the world model; and (2) augmenting offline datasets with exploratory trajectories to improve state coverage. Systematic evaluation across 31 continuous control benchmarks reveals that purely offline agents consistently underperform online baselines, yet even minimal online interaction matches, and often exceeds, online-level performance. Moreover, injecting exploration data significantly enhances the robustness and generalization of offline agents. Our study establishes a scalable co-design paradigm for data acquisition and model training, balancing data efficiency with world-model generalization.

📝 Abstract
Data collection is crucial for learning robust world models in model-based reinforcement learning. The most prevalent strategies are to actively collect trajectories by interacting with the environment during online training, or to train on offline datasets. At first glance, the task-agnostic nature of learning environment dynamics makes world models a good candidate for effective offline training. However, the effects of online vs. offline data on world models, and thus on the resulting task performance, have not been thoroughly studied in the literature. In this work, we investigate both paradigms in model-based settings, conducting experiments on 31 different environments. First, we show that online agents outperform their offline counterparts. We identify a key challenge behind the performance degradation of offline agents: encountering Out-Of-Distribution states at test time. This issue arises because, without the self-correction mechanism available to online agents, offline datasets with limited state space coverage induce a mismatch between the agent's imagination and real rollouts, compromising policy training. We demonstrate that this issue can be mitigated by allowing additional online interactions on a fixed or adaptive schedule, restoring the performance of online training with limited interaction data. We also show that incorporating exploration data helps mitigate the performance degradation of offline agents. Based on our insights, we recommend adding exploration data when collecting large datasets, as current efforts predominantly focus on expert data alone.
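The fixed-schedule mitigation described in the abstract can be sketched as a simple training loop: seed a replay buffer with offline data, then periodically add small online interaction batches to patch state-coverage gaps. This is a minimal illustrative sketch, not the paper's implementation; all function names, the toy environment, and the budget parameters (`k`, `online_batch`) are assumptions.

```python
import random

# Toy stand-ins for data sources (illustrative, not from the paper's codebase).
def collect_offline_dataset(n):
    """Pretend offline dataset: transitions clustered in a narrow, expert-like state region."""
    return [{"state": random.uniform(0.4, 0.6), "online": False} for _ in range(n)]

def interact_online(n):
    """Pretend online interaction: transitions drawn from the full state range."""
    return [{"state": random.uniform(0.0, 1.0), "online": True} for _ in range(n)]

def train_world_model(buffer):
    """Placeholder for one world-model update on the current replay buffer."""
    pass

def offline_training_with_online_budget(steps=1000, k=100, online_batch=16):
    """Fixed-schedule variant: every k steps, spend a small online interaction budget."""
    buffer = collect_offline_dataset(10_000)
    for step in range(steps):
        if step % k == 0:
            # Limited online interaction recalibrates coverage without full online training.
            buffer.extend(interact_online(online_batch))
        train_world_model(buffer)
    return buffer
```

With the defaults above, only 160 of 10,160 transitions (about 1.6%) come from online interaction, mirroring the paper's point that even minimal interaction can close most of the gap.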
Problem

Research questions and friction points this paper is trying to address.

Comparing online versus offline data collection strategies in model-based reinforcement learning
Investigating performance degradation caused by Out-Of-Distribution states in offline agents
Addressing state space coverage mismatch between agent imagination and real rollouts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online agents outperform offline counterparts
Mitigate OOD states with online interactions
Incorporate exploration data to improve performance
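The adaptive-schedule idea, triggering online interaction only when needed, can be illustrated with a divergence check between imagined and real rollouts. This is a hypothetical sketch: the error metric (`rollout_error`), the trigger function, and the threshold are assumptions, and the paper's actual criterion may differ.

```python
# Adaptive variant (illustrative): request online data only when the world
# model's imagined rollout drifts from the real one, i.e. when the agent is
# likely entering states poorly covered by the offline dataset.
def rollout_error(imagined, real):
    """Mean absolute gap between imagined and real state sequences."""
    return sum(abs(p - r) for p, r in zip(imagined, real)) / len(real)

def adaptive_online_trigger(imagined, real, threshold=0.1):
    """True when imagination diverges enough to warrant an online interaction batch."""
    return rollout_error(imagined, real) > threshold
```

The design intuition: a fixed schedule spends the interaction budget uniformly, while this trigger concentrates it on the out-of-distribution episodes where self-correction matters most.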