How to Provably Improve Return Conditioned Supervised Learning?

📅 2025-06-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
RCSL has gained attention in sequential decision-making for its simplicity and stability, yet it lacks the stitching property, so its performance is bounded by the quality of the behavior policies that generated the offline dataset. This paper proposes Reinforced RCSL, built around the novel concept of the *in-distribution optimal return-to-go*: at each state, the method identifies the highest future return actually achieved from that state within the offline dataset, without resorting to return augmentation or temporal-difference learning. The framework combines supervised learning with a policy-guided return-selection mechanism and comes with theoretical guarantees that the induced policy consistently outperforms that of standard RCSL. Empirical evaluation across diverse offline RL benchmarks demonstrates consistent and significant performance gains.
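
To make the return-selection idea concrete, here is a minimal sketch of tabulating, for each state seen in the offline data, the best return-to-go any trajectory achieved from it. The function name, the hashable-state assumption, and the `policy(state, rtg)` interface are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def in_distribution_optimal_rtg(trajectories):
    """For each state observed in the offline dataset, record the highest
    return-to-go that any trajectory actually achieved from that state.

    `trajectories` is a list of episodes, each a list of
    (state, action, reward) tuples; states are assumed hashable
    (e.g., discretized) in this sketch.
    """
    best_rtg = defaultdict(lambda: float("-inf"))
    for episode in trajectories:
        rtg = 0.0
        # Backward pass: accumulate rewards to get the return-to-go at each step.
        for state, _action, reward in reversed(episode):
            rtg += reward
            best_rtg[state] = max(best_rtg[state], rtg)
    return dict(best_rtg)

# Hypothetical usage at decision time: condition the learned policy on the
# best in-dataset return instead of a user-supplied target return.
# action = policy(state, best_rtg[state])
```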

📝 Abstract
In sequential decision-making, Return-Conditioned Supervised Learning (RCSL) has gained increasing recognition for its simplicity and stability. Unlike traditional offline reinforcement learning (RL) algorithms, RCSL frames policy learning as a supervised learning problem by taking both the state and return as input. This approach eliminates the instability often associated with temporal difference (TD) learning in offline RL. However, RCSL has been criticized for lacking the stitching property, meaning its performance is inherently limited by the quality of the policy used to generate the offline dataset. To address this limitation, we propose a principled and simple framework called Reinforced RCSL. The key innovation of our framework is the introduction of a concept we call the in-distribution optimal return-to-go. This mechanism leverages our policy to identify the best achievable in-dataset future return based on the current state, avoiding the need for complex return augmentation techniques. Our theoretical analysis demonstrates that Reinforced RCSL can consistently outperform the standard RCSL approach. Empirical results further validate our claims, showing significant performance improvements across a range of benchmarks.
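
Since the abstract frames RCSL as supervised learning over (state, return) inputs, a minimal sketch of that setup for continuous actions may help; the two-layer MLP and the mean-squared behavior-cloning loss are generic assumptions, not the paper's exact architecture or objective.

```python
import torch
import torch.nn as nn

class RCSLPolicy(nn.Module):
    """Return-conditioned policy: maps (state, return-to-go) to an action."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, rtg):
        # Append the scalar return-to-go to the state features.
        return self.net(torch.cat([state, rtg.unsqueeze(-1)], dim=-1))

def rcsl_loss(policy, state, rtg, action):
    # Plain supervised regression onto dataset actions: no TD bootstrapping,
    # which is the source of RCSL's stability.
    return nn.functional.mse_loss(policy(state, rtg), action)
```

At test time, Reinforced RCSL would condition this same kind of model on the in-distribution optimal return-to-go sketched above, rather than on a hand-picked target return.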
Problem

Research questions and friction points this paper is trying to address.

RCSL lacks the stitching property, so it cannot compose good behavior from suboptimal trajectories
Policy performance is bounded by the quality of the offline data-collection policy
Existing remedies rely on complex return augmentation or unstable TD learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforced RCSL framework with provable improvement over standard RCSL
In-distribution optimal return-to-go mechanism
Avoids complex return augmentation techniques
Zhishuai Liu
Duke University
Yu Yang
Duke University
Ruhan Wang
Indiana University Bloomington
Pan Xu
Duke University
Machine Learning, Optimization, Reinforcement Learning, AI, Healthcare
Dongruo Zhou
Indiana University Bloomington
Machine Learning