Offline Trajectory Generalization for Offline Reinforcement Learning

📅 2024-04-16
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Offline reinforcement learning faces two challenges: weak policy generalization and low-quality simulated trajectories. Existing model-based data augmentation methods are constrained by short-horizon simulation and lack mechanisms for evaluating or correcting generated data. This paper proposes OTTO, the first framework to apply causal World Transformers to offline trajectory generalization. OTTO jointly models state dynamics and rewards, and designs four high-reward-oriented trajectory perturbation strategies to generate high-fidelity synthetic data. Its plug-and-play design integrates with any underlying offline RL algorithm, enabling mixed training on original offline data and simulated trajectories without architectural modifications. Evaluated on the D4RL benchmark, OTTO achieves an average performance improvement of 12.7% over state-of-the-art methods. Notably, it demonstrates substantial gains in sparse-reward settings and out-of-distribution state generalization tasks.

📝 Abstract
Offline reinforcement learning (RL) aims to learn policies from static datasets of previously collected trajectories. Existing methods for offline RL either constrain the learned policy to the support of offline data or utilize model-based virtual environments to generate simulated rollouts. However, these methods suffer from (i) poor generalization to unseen states; and (ii) trivial improvement from low-quality rollout simulation. In this paper, we propose offline trajectory generalization through world transformers for offline reinforcement learning (OTTO). Specifically, we use causal Transformers, a.k.a. World Transformers, to predict state dynamics and the immediate reward. Then we propose four strategies to use World Transformers to generate high-reward trajectory simulation by perturbing the offline data. Finally, we jointly use offline data with simulated data to train an offline RL algorithm. OTTO serves as a plug-in module and can be integrated with existing offline RL methods to enhance them with the better generalization capability of transformers and high-reward data augmentation. Conducting extensive experiments on D4RL benchmark datasets, we verify that OTTO significantly outperforms state-of-the-art offline RL methods.
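The abstract's pipeline — learn a world model that predicts next state and reward, roll it forward from perturbed offline states, then train on a mix of real and simulated transitions — can be sketched minimally as below. This is not the paper's implementation: the `ToyWorldModel` (a random linear map standing in for the World Transformers), `simulate_trajectory`, and `mixed_batch` names and the 50/50 mixing ratio are all hypothetical illustrations of the general scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyWorldModel:
    """Hypothetical stand-in for the World Transformers: any model
    mapping (state, action) -> (next_state, reward). Here: linear."""
    def __init__(self, state_dim, action_dim):
        self.W = rng.normal(size=(state_dim + action_dim, state_dim))
        self.w_r = rng.normal(size=state_dim + action_dim)

    def step(self, state, action):
        x = np.concatenate([state, action])
        return x @ self.W, float(x @ self.w_r)

def simulate_trajectory(model, start_state, policy, horizon):
    """Roll the learned model forward from an offline (possibly
    perturbed) start state to produce a simulated trajectory."""
    traj, s = [], start_state
    for _ in range(horizon):
        a = policy(s)
        s_next, r = model.step(s, a)
        traj.append((s, a, r, s_next))
        s = s_next
    return traj

def mixed_batch(offline, simulated, batch_size, real_ratio=0.5):
    """Sample a training batch mixing real offline transitions with
    model-simulated ones (the ratio here is an assumed hyperparameter)."""
    n_real = int(batch_size * real_ratio)
    idx_r = rng.integers(len(offline), size=n_real)
    idx_s = rng.integers(len(simulated), size=batch_size - n_real)
    return [offline[i] for i in idx_r] + [simulated[i] for i in idx_s]
```

Because the augmentation happens purely at the data level, any off-the-shelf offline RL learner can consume the mixed batches unchanged, which is the sense in which OTTO is a plug-in module.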
Problem

Research questions and friction points this paper is trying to address.

Enhancing offline RL with long-horizon trajectory simulation
Evaluating and correcting low-quality augmented data
Improving model-free offline RL performance in sparse-reward environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble of Transformers predicts dynamics and rewards
Uncertainty-based evaluator corrects low-confidence data
Plug-in module enhances model-free offline RL
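The first two innovation points — an ensemble predicting dynamics and an uncertainty-based evaluator that rejects low-confidence data — can be illustrated with the sketch below. All names (`EnsembleDynamics`, `filter_transitions`), the linear stand-in models, and the use of ensemble disagreement as the uncertainty score are assumptions for illustration, not the paper's exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

class EnsembleDynamics:
    """Hypothetical ensemble of k dynamics models (linear stand-ins
    for the Transformers). Disagreement across members is used as an
    uncertainty score for generated transitions."""
    def __init__(self, k, state_dim, action_dim):
        self.Ws = [rng.normal(size=(state_dim + action_dim, state_dim))
                   for _ in range(k)]

    def predict(self, state, action):
        x = np.concatenate([state, action])
        preds = np.stack([x @ W for W in self.Ws])  # shape (k, state_dim)
        mean = preds.mean(axis=0)
        # Uncertainty: largest per-dimension std across ensemble members.
        uncertainty = float(preds.std(axis=0).max())
        return mean, uncertainty

def filter_transitions(ensemble, state_actions, threshold):
    """Keep only simulated transitions the ensemble agrees on;
    high-disagreement (low-confidence) data is discarded."""
    kept = []
    for s, a in state_actions:
        s_next, u = ensemble.predict(s, a)
        if u <= threshold:
            kept.append((s, a, s_next))
    return kept
```

Gating augmented data on ensemble disagreement is a common way to keep model-based rollouts from polluting the training set with transitions the learned dynamics cannot predict reliably.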