🤖 AI Summary
Offline reinforcement learning often suffers from excessive conservatism: the distributional shift between the static dataset and the learned policy mandates pessimistic constraints that limit performance gains. This work proposes MoReBRAC, a framework that uses a dual-recurrent world model to generate high-fidelity synthetic transitions and introduces a multi-level uncertainty filtering mechanism to safely expand the training data manifold. A variational autoencoder (VAE) serves as a geometric anchor to guide the synthesis process, while model sensitivity analysis combined with Monte Carlo dropout forms a hierarchical uncertainty pipeline that ensures the reliability of the synthetic data. Evaluated on the D4RL Gym-MuJoCo benchmark, MoReBRAC significantly outperforms existing methods, with particularly strong performance on random and suboptimal datasets.
📝 Abstract
Offline Reinforcement Learning (ORL) holds immense promise for safety-critical domains like industrial robotics, where real-time environmental interaction is often prohibitive. A primary obstacle in ORL remains the distributional shift between the static dataset and the learned policy, which typically mandates high degrees of conservatism that can restrain potential policy improvements. We present MoReBRAC, a model-based framework that addresses this limitation through uncertainty-aware latent synthesis. Instead of relying solely on the fixed data, MoReBRAC utilizes a dual-recurrent world model to synthesize high-fidelity transitions that augment the training manifold. To ensure the reliability of this synthetic data, we implement a hierarchical uncertainty pipeline integrating Variational Autoencoder (VAE) manifold detection, model sensitivity analysis, and Monte Carlo (MC) dropout. This multi-layered filtering process guarantees that only transitions residing within high-confidence regions of the learned dynamics are utilized. Our results on D4RL Gym-MuJoCo benchmarks reveal significant performance gains, particularly in "random" and "suboptimal" data regimes. We further provide insights into the role of the VAE as a geometric anchor and discuss the distributional trade-offs encountered when learning from near-optimal datasets.
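The hierarchical filtering idea can be illustrated with a minimal sketch. The paper does not publish its implementation details here, so the model stubs, function names, and thresholds (`tau_rec`, `tau_sens`, `tau_var`) below are hypothetical stand-ins: a trained VAE and dynamics network would replace the toy functions. A synthetic transition is kept only if it clears all three gates in sequence, namely VAE reconstruction error (manifold check), a finite-difference sensitivity estimate, and MC-dropout predictive variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_reconstruct(s):
    # Hypothetical trained VAE; a faithful reconstruction implies the
    # state lies near the data manifold. Here: a deterministic stub.
    return 0.99 * s

def dynamics(s, a, dropout=False):
    # Hypothetical learned dynamics model; enabling dropout at inference
    # time yields stochastic forward passes for MC-dropout sampling.
    noise = 0.05 * rng.standard_normal(s.shape) if dropout else 0.0
    return s + 0.1 * a + noise

def passes_filter(s, a, tau_rec=0.05, tau_sens=2.0, tau_var=0.01, k=10):
    # Level 1: VAE manifold check via reconstruction error.
    if np.linalg.norm(vae_reconstruct(s) - s) > tau_rec:
        return False
    # Level 2: model sensitivity via a finite-difference perturbation.
    eps = 1e-3
    sens = np.linalg.norm(dynamics(s + eps, a) - dynamics(s, a)) / eps
    if sens > tau_sens:
        return False
    # Level 3: epistemic uncertainty via variance over k dropout passes.
    preds = np.stack([dynamics(s, a, dropout=True) for _ in range(k)])
    return bool(preds.var(axis=0).mean() <= tau_var)

print(passes_filter(np.ones(3), np.ones(3)))
```

Ordering the gates from cheapest to most expensive (one VAE pass, two dynamics passes, then k dropout passes) lets clearly out-of-manifold transitions be rejected before the costlier MC sampling runs.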