🤖 AI Summary
Transformer-based policies in model-free online reinforcement learning suffer from training instability, slow convergence, and heavy reliance on large replay buffers. To address these challenges, this work proposes a two-stage training framework: in the first stage, a simpler and more stable "Accelerator" policy interacts with the environment and trains the Transformer through behavior cloning; in the second stage, the pretrained Transformer switches to fully online, interactive RL fine-tuning. Crucially, the Accelerator serves as a stable teacher that mitigates the optimization difficulties Transformers face in online settings. Experiments on state-based and image-based ManiSkill environments and on MuJoCo tasks in both MDP and POMDP settings show that the approach enables stable Transformer training, reduces training time on image-based environments by up to a factor of two, and shrinks the required replay buffer to only 10-20k transitions, which significantly lowers computational overhead.
📝 Abstract
The emergence of transformer-based models in Reinforcement Learning (RL) has expanded the range of what is possible in robotics tasks, but it has simultaneously introduced a wide range of implementation challenges, especially in model-free online RL. Some existing learning algorithms cannot be easily combined with transformer-based models due to the instability of the latter. In this paper, we propose a method that uses an Accelerator policy as a trainer for the transformer. The Accelerator, a simpler and more stable model, interacts with the environment independently while simultaneously training the transformer through behavior cloning during the first stage of the proposed algorithm. In the second stage, the pretrained transformer starts to interact with the environment in a fully online setting. As a result, this model-free algorithm accelerates the transformer's learning and helps it train online in a more stable and faster manner. Through experiments on both state-based and image-based ManiSkill environments, as well as on MuJoCo tasks in MDP and POMDP settings, we show that applying our algorithm not only enables stable training of transformers but also reduces training time on image-based environments by up to a factor of two. Moreover, it decreases the required replay buffer size in off-policy methods to 10-20 thousand transitions, which significantly lowers the overall computational demands.
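As a concrete illustration of the two-stage scheme described in the abstract, here is a minimal Python sketch. All names and interfaces (`accelerator.act`/`update`, `transformer.act`/`rl_update`, the buffer API, and the old-style Gym step signature) are hypothetical placeholders chosen for illustration under the paper's description, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def train_two_stage(env, accelerator, transformer, buffer,
                    bc_steps=50_000, rl_steps=500_000):
    """Illustrative two-stage loop: (1) the Accelerator policy collects
    experience and the transformer is fit to it by behavior cloning;
    (2) the pretrained transformer continues fully online with RL."""
    bc_opt = torch.optim.Adam(transformer.parameters(), lr=3e-4)

    # --- Stage 1: Accelerator interacts, transformer learns by behavior cloning ---
    obs = env.reset()
    for _ in range(bc_steps):
        action = accelerator.act(obs)                    # simpler, stable policy drives the env
        next_obs, reward, done, _ = env.step(action)
        buffer.add(obs, action, reward, next_obs, done)  # small buffer (e.g. 10-20k transitions)
        obs = env.reset() if done else next_obs

        # the Accelerator keeps improving with its own (assumed off-policy) update
        accelerator.update(buffer.sample())

        # the transformer is distilled from the Accelerator's behavior
        batch = buffer.sample()
        bc_loss = F.mse_loss(transformer(batch.obs), batch.actions)
        bc_opt.zero_grad()
        bc_loss.backward()
        bc_opt.step()

    # --- Stage 2: the pretrained transformer interacts and fine-tunes fully online ---
    obs = env.reset()
    for _ in range(rl_steps):
        action = transformer.act(obs)
        next_obs, reward, done, _ = env.step(action)
        buffer.add(obs, action, reward, next_obs, done)
        obs = env.reset() if done else next_obs
        transformer.rl_update(buffer.sample())           # any off-policy actor-critic update
```

The key design point the sketch tries to convey is that the transformer never has to explore on its own during the unstable early phase: it only imitates the Accelerator's trajectories, and it takes over environment interaction only after this behavior-cloning warm start.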