🤖 AI Summary
This work addresses the trade-off between catastrophic forgetting and forward transfer in continual reinforcement learning by proposing a neuroscience-inspired world-model approach. Instead of replaying experiences directly to the policy network, the method replays them to a predictive world model and introduces a dual-buffer mechanism that integrates short- and long-term memory: a short-term buffer holds recent experiences, while a long-term buffer preserves task diversity through intelligent sampling. Built upon the DreamerV3 framework and incorporating a memory-efficient replay strategy based on distribution matching, the proposed method significantly mitigates catastrophic forgetting while maintaining strong forward transfer. On standard benchmarks including Atari and Procgen CoinRun, it outperforms existing baselines that use replay buffers of comparable size.
📝 Abstract
Continual reinforcement learning challenges agents to acquire new skills while retaining previously learned ones, with the goal of improving performance on both past and future tasks. Most existing approaches rely on model-free methods with replay buffers to mitigate catastrophic forgetting; however, these solutions often face significant scalability challenges due to large memory demands. Drawing inspiration from neuroscience, where the brain replays experiences to a predictive world model rather than directly to the policy, we present ARROW (Augmented Replay for RObust World models), a model-based continual RL algorithm that extends DreamerV3 with a memory-efficient, distribution-matching replay buffer. Unlike standard fixed-size FIFO buffers, ARROW maintains two complementary buffers: a short-term buffer for recent experiences and a long-term buffer that preserves task diversity through intelligent sampling. We evaluate ARROW on two challenging continual RL settings: tasks without shared structure (Atari), and tasks with shared structure where knowledge transfer is possible (Procgen CoinRun variants). Compared to model-free and model-based baselines with replay buffers of the same size, ARROW demonstrates substantially less forgetting on tasks without shared structure while maintaining comparable forward transfer. Our findings highlight the potential of model-based RL and bio-inspired approaches for continual reinforcement learning, warranting further research.
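To make the dual-buffer idea concrete, here is a minimal sketch of how a short-term FIFO buffer could be combined with a long-term buffer whose contents track the overall data distribution across tasks. The class name `DualReplayBuffer`, its methods, and the choice of reservoir sampling as the distribution-matching strategy are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import deque


class DualReplayBuffer:
    """Hypothetical sketch of a dual-buffer replay mechanism.

    A short-term FIFO buffer keeps recent transitions; a long-term
    buffer uses reservoir sampling so that, in expectation, every
    transition ever observed is retained with equal probability,
    approximating the joint data distribution across tasks.
    """

    def __init__(self, short_capacity, long_capacity):
        self.short = deque(maxlen=short_capacity)  # recent experience (FIFO)
        self.long = []                             # task-diverse reservoir
        self.long_capacity = long_capacity
        self.seen = 0                              # total transitions observed

    def add(self, transition):
        self.short.append(transition)
        self.seen += 1
        if len(self.long) < self.long_capacity:
            self.long.append(transition)
        else:
            # Reservoir sampling (Algorithm R): each transition ends up
            # in the long-term buffer with probability long_capacity / seen.
            j = random.randrange(self.seen)
            if j < self.long_capacity:
                self.long[j] = transition

    def sample(self, batch_size, short_fraction=0.5):
        # Mix recent and long-term experience in each world-model batch;
        # the batch may be smaller than batch_size early in training.
        n_short = min(int(batch_size * short_fraction), len(self.short))
        n_long = min(batch_size - n_short, len(self.long))
        return (random.sample(list(self.short), n_short)
                + random.sample(self.long, n_long))
```

Under these assumptions, sampled batches train the world model rather than the policy directly, so the fixed-size long-term reservoir is what keeps earlier tasks represented as new ones arrive.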