🤖 AI Summary
This work tackles model exploitation caused by model errors in model-based offline reinforcement learning, along with critical limitations of existing adversarial approaches such as RAMBO, namely severe Q-value underestimation and training instability. The authors propose ROMI, which employs robust value-aware model learning: the dynamics model is trained to predict future states whose values are close to the minimum Q-value within a scale-adjustable state uncertainty set. ROMI further introduces an implicitly differentiable adaptive weighting mechanism that casts model learning as a bi-level optimization, making it both dynamics- and value-aware. This design mitigates excessive conservatism and gradient explosion, substantially improving out-of-distribution generalization during multi-step rollouts. Empirical evaluations on the D4RL and NeoRL benchmarks demonstrate that ROMI significantly outperforms RAMBO, achieving state-of-the-art or highly competitive performance, particularly on datasets where RAMBO underperforms.
📝 Abstract
Model-based offline reinforcement learning (RL) aims to enhance offline RL with a dynamics model that facilitates policy exploration. However, *model exploitation* can occur due to inevitable model errors, degrading algorithm performance. Adversarial model learning offers a theoretical framework for mitigating model exploitation by solving a maximin formulation. Within this paradigm, RAMBO (Rigter et al., 2022) has emerged as a representative and widely used method that provides a practical implementation based on model gradients. However, we empirically show that severe Q-value underestimation and gradient explosion can occur in RAMBO after only slight hyperparameter tuning, suggesting that it tends to be overly conservative and suffers from unstable model updates. To address these issues, we propose **RO**bust value-aware **M**odel learning with **I**mplicitly differentiable adaptive weighting (ROMI). Instead of updating the dynamics model via model gradients, ROMI introduces a novel robust value-aware model learning approach: the dynamics model is required to predict future states whose values are close to the minimum Q-value within a scale-adjustable state uncertainty set, enabling controllable conservatism and stable model updates. To further improve out-of-distribution (OOD) generalization during multi-step rollouts, we propose implicitly differentiable adaptive weighting, a bi-level optimization scheme that adaptively achieves dynamics- and value-aware model learning. Empirical results on the D4RL and NeoRL datasets show that ROMI significantly outperforms RAMBO and achieves competitive or superior performance compared to other state-of-the-art methods on datasets where RAMBO typically underperforms. Code is available at https://github.com/zq2r/ROMI.git.
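To make the core idea concrete, the robust value-aware target — finding a state close to the model's prediction whose Q-value is minimal within a scale-adjustable uncertainty set — can be sketched with a simple random-search approximation. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: the Q-function, the L2-ball uncertainty set, the radius `epsilon`, and the helper `robust_target_state` are all placeholders introduced here for illustration.

```python
import numpy as np

def robust_target_state(pred_next_state, q_fn, epsilon, n_candidates=64, rng=None):
    """Approximate the minimum-Q state inside an L2 ball of radius `epsilon`
    around the model's predicted next state, via random search (illustrative).
    The dynamics model would then be regressed toward this pessimistic state."""
    rng = np.random.default_rng(rng)
    dim = pred_next_state.shape[0]
    # Sample perturbations uniformly inside the L2 ball of radius epsilon:
    # normalize Gaussian directions, then rescale radii by u^(1/dim).
    noise = rng.normal(size=(n_candidates, dim))
    noise = epsilon * noise / np.maximum(np.linalg.norm(noise, axis=1, keepdims=True), 1e-8)
    noise *= rng.uniform(0.0, 1.0, size=(n_candidates, 1)) ** (1.0 / dim)
    # Include the unperturbed prediction so the result is never worse than it.
    candidates = np.vstack([pred_next_state[None, :], pred_next_state[None, :] + noise])
    q_vals = np.array([q_fn(s) for s in candidates])
    return candidates[np.argmin(q_vals)]

# Toy example: a quadratic Q peaked at the origin.
q_fn = lambda s: -float(np.sum(s ** 2))
pred = np.array([0.5, -0.2])
target = robust_target_state(pred, q_fn, epsilon=0.1, rng=0)
```

In the paper's formulation the inner minimization would be solved more carefully and embedded in a bi-level scheme with implicitly differentiable adaptive weights; the sketch only shows how an adjustable radius `epsilon` trades off conservatism (larger radius, lower target Q-values) against fidelity to the model's prediction.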