AI Summary
Prompt-DT suffers from weak few-shot prompting capability and poor task discrimination in offline reinforcement learning, largely because prompt data are scarce, costly to acquire, or unsafe to collect. To address these challenges, this work introduces pre-trained large language models (LLMs) into the Decision Transformer framework for the first time, proposing a tripartite synergistic mechanism: LLM-based initialization, LoRA-based fine-tuning, and prompt regularization. This design significantly enhances the model's task awareness and generalization ability under limited prompt supervision. On the MuJoCo benchmark, the method matches the performance of Prompt-DT trained on full prompt data while using only 10% of the prompts. Ablation studies confirm the necessity and effectiveness of each component. Overall, this work establishes a novel paradigm for prompt-based offline RL in low-data, safety-critical settings.
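The LoRA-based fine-tuning component can be illustrated with a minimal sketch: the pre-trained weights stay frozen, and only a low-rank residual update is trained. This is a generic PyTorch illustration of the LoRA technique, not the paper's actual implementation; the module name `LoRALinear` and the hyperparameters are assumptions for this example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic LoRA sketch: wraps a frozen linear layer with a trainable
    low-rank update, y = W x + (alpha / r) * B A x. Names and defaults here
    are illustrative, not taken from the paper."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # Rank-r factors: A is small random, B is zero so the wrapped layer
        # behaves exactly like the frozen base layer at initialization.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Only the rank-r factors A and B receive gradients during fine-tuning.
layer = LoRALinear(nn.Linear(64, 64), r=4)
trainable = sorted(n for n, p in layer.named_parameters() if p.requires_grad)
```

Because `B` starts at zero, the model's behavior at the start of fine-tuning is identical to the LLM-initialized model, and only a small fraction of parameters is updated afterward.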
Abstract
Decision Transformer (DT) has emerged as a promising class of algorithms for offline reinforcement learning (RL) tasks, leveraging pre-collected datasets and the Transformer's capability to model long sequences. Recent works have demonstrated that using parts of trajectories from training tasks as prompts in DT enhances its performance on unseen tasks, giving rise to Prompt-DT methods. However, collecting data from specific environments can be both costly and unsafe in many scenarios, leading to suboptimal performance and limited few-shot prompt abilities due to the data-hungry nature of Transformer-based models. Additionally, the limited datasets used in pre-training make it challenging for Prompt-DT-style methods to distinguish between various RL tasks through prompts alone. To address these challenges, we introduce the Language model-initialized Prompt Decision Transformer (LPDT) framework, which leverages pre-trained language models to provide rich prior knowledge for RL tasks and fine-tunes the sequence model using Low-rank Adaptation (LoRA) for meta-RL problems. We further incorporate prompt regularization to effectively differentiate between tasks based on prompt feature representations. Comprehensive empirical studies demonstrate that initializing with a pre-trained language model provides useful prior knowledge and achieves performance comparable to that of Prompt-DT with only $10\%$ of the data in some MuJoCo control tasks. We also provide a thorough ablation study to validate the effectiveness of each component, including sequence modeling, language models, prompt regularization, and prompt strategies.
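The prompt-regularization idea of differentiating tasks through prompt feature representations can be sketched as a contrastive-style auxiliary loss: prompt embeddings from the same task are pulled together and those from different tasks pushed apart. The abstract does not specify the exact regularizer, so the InfoNCE-style form, the function name `prompt_separation_loss`, and the temperature value below are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def prompt_separation_loss(prompt_emb: torch.Tensor,
                           task_ids: torch.Tensor,
                           temp: float = 0.1) -> torch.Tensor:
    """Hypothetical InfoNCE-style prompt regularizer (illustrative only).
    prompt_emb: (batch, dim) prompt feature representations.
    task_ids:   (batch,) integer task labels for each prompt."""
    z = F.normalize(prompt_emb, dim=-1)
    sim = (z @ z.t()) / temp                        # pairwise cosine similarities
    same = task_ids.unsqueeze(0) == task_ids.unsqueeze(1)
    not_self = ~torch.eye(len(z), dtype=torch.bool)  # exclude self-pairs
    pos = same & not_self
    # Log-softmax over each row's non-self similarities.
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(~not_self, float("-inf")), dim=1, keepdim=True)
    # Maximize log-probability of same-task pairs.
    return -log_prob[pos].mean()

emb = torch.randn(8, 16)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = prompt_separation_loss(emb, ids)
```

Minimizing such a term alongside the sequence-modeling objective encourages the prompt encoder to produce task-discriminative features even when only a few prompts per task are available.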