🤖 AI Summary
In multi-game offline reinforcement learning, manually specified target returns hinder generalization and practical deployment. Method: This paper proposes a training-free, data-driven approach for adaptive target return estimation. It models the reward distribution of offline datasets and integrates a dynamic target scaling mechanism with a lightweight meta-optimizer into the Decision Transformer architecture—requiring no expert priors or additional training to automatically infer game-specific target returns. Contribution/Results: Evaluated on the Atari multi-game benchmark, the method achieves a 17.3% average return improvement over baselines, significantly enhances cross-game generalization stability, and incurs zero computational overhead. To our knowledge, this is the first work to enable plug-and-play, game-agnostic target return adaptation in offline RL, establishing a novel paradigm for unsupervised multi-task offline reinforcement learning.
📝 Abstract
Achieving autonomous agents with robust generalization capabilities across diverse games and tasks remains one of the ultimate goals in AI research. Recent advancements in transformer-based offline reinforcement learning, exemplified by the MultiGame Decision Transformer [Lee et al., 2022], have shown remarkable performance across various games or tasks. However, these approaches depend heavily on human expertise, presenting substantial challenges for practical deployment, particularly in scenarios with limited prior game-specific knowledge. In this paper, we propose an algorithm called Multi-Game Target Return Optimizer (MTRO) to autonomously determine game-specific target returns within the Multi-Game Decision Transformer framework using solely offline datasets. MTRO addresses the existing limitations by automating the target return configuration process, leveraging environmental reward information extracted from offline datasets. Notably, MTRO does not require additional training, enabling seamless integration into existing Multi-Game Decision Transformer architectures. Our experimental evaluations on Atari games demonstrate that MTRO enhances the performance of RL policies across a wide array of games, underscoring its potential to advance the field of autonomous agent development.