Q-WSL: Optimizing Goal-Conditioned RL with Weighted Supervised Learning via Dynamic Programming

📅 2024-10-09
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Goal-conditioned reinforcement learning (GCRL) methods such as Goal-Conditioned Weighted Supervised Learning (GCWSL) lack trajectory stitching and generalize poorly to unseen goals. Method: The paper proposes Q-learning Weighted Supervised Learning (Q-WSL), a framework that integrates the dynamic programming principle of Q-learning into weighted supervised learning. Q-WSL uses dynamic programming results to identify optimal actions for (state, goal) pairs across different trajectories in the replay buffer and distills them into the goal-conditioned policy, removing GCWSL's reliance on complete, high-quality demonstration trajectories. Contribution/Results: On challenging sparse-reward goal-reaching tasks, Q-WSL improves both sample efficiency and final performance over other goal-conditioned approaches, and it remains robust under binary reward structures and environmental stochasticity without requiring additional environment interactions or expert demonstrations.

📝 Abstract
A novel class of advanced algorithms, termed Goal-Conditioned Weighted Supervised Learning (GCWSL), has recently emerged to tackle the challenges posed by sparse rewards in goal-conditioned reinforcement learning (RL). GCWSL consistently delivers strong performance across a diverse set of goal-reaching tasks due to its simplicity, effectiveness, and stability. However, GCWSL methods lack a crucial capability known as trajectory stitching, which is essential for learning optimal policies when faced with unseen skills during testing. This limitation becomes particularly pronounced when the replay buffer is predominantly filled with sub-optimal trajectories. In contrast, traditional TD-based RL methods, such as Q-learning, which utilize Dynamic Programming, do not face this issue but often experience instability due to the inherent difficulties in value function approximation. In this paper, we propose Q-learning Weighted Supervised Learning (Q-WSL), a novel framework designed to overcome the limitations of GCWSL by incorporating the strengths of Dynamic Programming found in Q-learning. Q-WSL leverages Dynamic Programming results to identify the optimal action for each (state, goal) pair across different trajectories within the replay buffer. This approach synergizes the strengths of both Q-learning and GCWSL, effectively mitigating their respective weaknesses and enhancing overall performance. Empirical evaluations on challenging goal-reaching tasks demonstrate that Q-WSL surpasses other goal-conditioned approaches in terms of both performance and sample efficiency. Additionally, Q-WSL exhibits notable robustness in environments characterized by binary reward structures and environmental stochasticity.
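The two-stage idea described in the abstract (dynamic programming supplies optimal actions for (state, goal) pairs; weighted supervised learning then distills them into a goal-conditioned policy) can be illustrated with a minimal tabular sketch. Everything environment-specific below (a 1-D chain world, the learning rates, the advantage-style weight) is an illustrative assumption, not the paper's actual implementation:

```python
# Hypothetical tabular sketch of the Q-WSL idea: Q-learning (dynamic
# programming) supplies optimal actions per (state, goal) pair, and a
# goal-conditioned policy is then trained by weighted supervised learning.
# The chain environment and all hyperparameters are illustrative assumptions.
import math
import random

N_STATES, N_ACTIONS = 5, 2   # chain of 5 states; actions: 0 = left, 1 = right
GAMMA = 0.9

def step(state, action, goal):
    """Toy deterministic chain dynamics with a sparse binary reward."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == goal else 0.0
    return nxt, reward

# --- Stage 1: Q-learning over (state, goal, action) triples ---
Q = {(s, g, a): 0.0 for s in range(N_STATES)
                    for g in range(N_STATES) for a in range(N_ACTIONS)}
rng = random.Random(0)
for _ in range(5000):
    s, g = rng.randrange(N_STATES), rng.randrange(N_STATES)
    a = rng.randrange(N_ACTIONS)
    s2, r = step(s, a, g)
    target = r + GAMMA * max(Q[(s2, g, b)] for b in range(N_ACTIONS))
    Q[(s, g, a)] += 0.5 * (target - Q[(s, g, a)])

# --- Stage 2: weighted supervised learning on the Q-optimal actions ---
# The policy is a softmax over per-(state, goal) logits; each pair is pushed
# toward the DP-optimal action, weighted by an exponentiated advantage.
logits = {(s, g, a): 0.0 for (s, g, a) in Q}
for _ in range(200):
    for s in range(N_STATES):
        for g in range(N_STATES):
            best = max(range(N_ACTIONS), key=lambda a: Q[(s, g, a)])
            adv = Q[(s, g, best)] - sum(Q[(s, g, a)]
                                        for a in range(N_ACTIONS)) / N_ACTIONS
            w = math.exp(adv)  # advantage-style supervision weight
            z = sum(math.exp(logits[(s, g, a)]) for a in range(N_ACTIONS))
            for a in range(N_ACTIONS):
                p = math.exp(logits[(s, g, a)]) / z
                grad = (1.0 if a == best else 0.0) - p
                logits[(s, g, a)] += 0.1 * w * grad  # weighted cross-entropy step

def act(s, g):
    """Greedy action of the distilled goal-conditioned policy."""
    return max(range(N_ACTIONS), key=lambda a: logits[(s, g, a)])
```

Because the supervision targets come from the Bellman backup rather than from logged trajectories, the policy can combine actions from different trajectories (the trajectory-stitching property GCWSL lacks); e.g. `act(1, 4)` moves right and `act(3, 0)` moves left even if no single demonstration covered either path.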
Problem

Research questions and friction points this paper is trying to address.

Overcoming sparse rewards in goal-conditioned RL given GCWSL's limitations
GCWSL's lack of trajectory stitching, needed for optimal policy learning
Combining Q-learning and GCWSL to improve both stability and performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Q-learning with Weighted Supervised Learning
Uses Dynamic Programming for optimal action selection
Enhances robustness in binary reward environments