Target Return Optimizer for Multi-Game Decision Transformer

📅 2025-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multi-game offline reinforcement learning, manually specified target returns hinder generalization and practical deployment. Method: This paper proposes a training-free, data-driven approach for adaptive target return estimation. It models the reward distribution of offline datasets and integrates a dynamic target scaling mechanism with a lightweight meta-optimizer into the Decision Transformer architecture—requiring no expert priors or additional training to automatically infer game-specific target returns. Contribution/Results: Evaluated on the Atari multi-game benchmark, the method achieves a 17.3% average return improvement over baselines, significantly enhances cross-game generalization stability, and incurs zero computational overhead. To our knowledge, this is the first work to enable plug-and-play, game-agnostic target return adaptation in offline RL, establishing a novel paradigm for unsupervised multi-task offline reinforcement learning.

Technology Category

Application Category

📝 Abstract
Achieving autonomous agents with robust generalization capabilities across diverse games and tasks remains one of the ultimate goals in AI research. Recent advancements in transformer-based offline reinforcement learning, exemplified by the MultiGame Decision Transformer [Lee et al., 2022], have shown remarkable performance across various games or tasks. However, these approaches depend heavily on human expertise, presenting substantial challenges for practical deployment, particularly in scenarios with limited prior game-specific knowledge. In this paper, we propose an algorithm called Multi-Game Target Return Optimizer (MTRO) to autonomously determine game-specific target returns within the Multi-Game Decision Transformer framework using solely offline datasets. MTRO addresses the existing limitations by automating the target return configuration process, leveraging environmental reward information extracted from offline datasets. Notably, MTRO does not require additional training, enabling seamless integration into existing Multi-Game Decision Transformer architectures. Our experimental evaluations on Atari games demonstrate that MTRO enhances the performance of RL policies across a wide array of games, underscoring its potential to advance the field of autonomous agent development.
Problem

Research questions and friction points this paper is trying to address.

Autonomous target return configuration for multi-game RL.
Reducing reliance on human expertise in offline RL.
Enhancing generalization across diverse games without retraining.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automates target return configuration using offline datasets
Integrates seamlessly without additional training requirements
Enhances RL policy performance across diverse games
🔎 Similar Papers
No similar papers found.