🤖 AI Summary
This work addresses the challenges of safe exploration and behavioral support constraints in offline-to-online reinforcement learning by proposing the SPAARS framework. SPAARS employs curriculum learning to first conduct efficient and safe exploration in a low-dimensional latent manifold and then seamlessly transfers the policy to the original action space, thereby circumventing decoder bottlenecks. The approach integrates a conditional variational autoencoder (CVAE) with a two-stage policy: theoretical analysis shows that policy gradients in the latent space reduce variance, while behavior cloning stabilizes the curriculum transition. Notably, SPAARS requires only unordered (s, a) pairs for training, eliminating the need for trajectory segmentation. A variant incorporating OPAL's temporal skill pretraining further enhances performance. Experiments demonstrate that SPAARS-SUPE achieves a normalized return of 0.825 on kitchen-mixed-v0 with a 5× improvement in sample efficiency, while standalone SPAARS attains scores of 92.7 and 102.9 on hopper and walker2d, respectively, significantly outperforming the IQL baseline.
📄 Abstract
Offline-to-online reinforcement learning (RL) offers a promising paradigm for robotics by pre-training policies on safe, offline demonstrations and fine-tuning them via online interaction. However, a fundamental challenge remains: how to safely explore online without deviating from the behavioral support of the offline data? While recent methods leverage conditional variational autoencoders (CVAEs) to bound exploration within a latent space, they inherently suffer from an exploitation gap -- a performance ceiling imposed by the decoder's reconstruction loss. We introduce SPAARS, a curriculum learning framework that initially constrains exploration to the low-dimensional latent manifold for sample-efficient, safe behavioral improvement, then seamlessly transfers control to the raw action space, bypassing the decoder bottleneck. SPAARS has two instantiations: the CVAE-based variant requires only unordered (s, a) pairs and no trajectory segmentation; SPAARS-SUPE pairs SPAARS with OPAL temporal skill pretraining for stronger exploration structure, at the cost of requiring trajectory chunks. We prove an upper bound on the exploitation gap using the Performance Difference Lemma, establish that latent-space policy gradients achieve provable variance reduction over raw-space exploration, and show that concurrent behavioral cloning during the latent phase directly controls curriculum transition stability. Empirically, SPAARS-SUPE achieves 0.825 normalized return on kitchen-mixed-v0 versus 0.75 for SUPE, with 5× better sample efficiency; standalone SPAARS achieves 92.7 and 102.9 normalized return on hopper-medium-v2 and walker2d-medium-v2 respectively, surpassing the IQL baselines of 66.3 and 78.3 and confirming the utility of the unordered-pair CVAE instantiation.
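The two-stage curriculum described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the CVAE decoder and both policies are reduced to linear maps so the control flow is runnable, and all names (`act`, `bc_update`, `switch_step`) are invented for the sketch. It shows the mechanism the abstract claims: phase-1 actions pass through a frozen decoder (bounding them to behavioral support), a raw-space policy is behavior-cloned concurrently, and after the switch the raw policy acts directly, bypassing the decoder.

```python
# Toy sketch of the SPAARS two-stage curriculum (hypothetical names; the
# real method uses a CVAE decoder and neural policies, here all linear).
import numpy as np

rng = np.random.default_rng(0)
S_DIM, Z_DIM, A_DIM = 4, 2, 3          # latent manifold is lower-dimensional

decoder = rng.normal(size=(Z_DIM, A_DIM))   # frozen "CVAE decoder" (toy)
latent_policy = rng.normal(size=(S_DIM, Z_DIM))  # phase-1 policy: s -> z
raw_policy = np.zeros((S_DIM, A_DIM))            # phase-2 policy: s -> a

def act(s, step, switch_step=500):
    """Phase 1: decode a latent action; phase 2: act in raw action space."""
    if step < switch_step:
        return (s @ latent_policy) @ decoder  # confined to decoder support
    return s @ raw_policy                     # decoder bottleneck bypassed

def bc_update(s, a, lr=0.5):
    """Behavior-clone the raw policy onto latent-phase actions so the
    curriculum hand-off starts from a matching raw-space policy."""
    global raw_policy
    pred = s @ raw_policy
    raw_policy += lr * s.T @ (a - pred) / len(s)  # least-squares step

# Phase 1: act through the decoder while cloning the raw policy alongside.
for _ in range(200):
    s = rng.normal(size=(32, S_DIM))
    bc_update(s, act(s, step=0))

# At the hand-off, raw-space behavior matches latent-phase behavior.
s = rng.normal(size=(32, S_DIM))
gap = np.abs(act(s, step=0) - act(s, step=1000)).max()
print(f"behavior gap at hand-off: {gap:.2e}")
```

Because the raw policy is cloned on the latent phase's own actions, the behavior gap at the switch is driven toward zero, which is the stability property the concurrent behavioral cloning is claimed to control.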