DOMAIN: Mildly Conservative Model-Based Offline Reinforcement Learning

📅 2023-09-16
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
To address distributional shift and excessive conservatism induced by model bias, this paper proposes a mildly conservative model-based offline reinforcement learning method that does not require explicit model uncertainty estimation. The core method introduces an adaptive model sampling distribution mechanism, which dynamically weights model-generated transitions to theoretically guarantee both a lower bound on the Q-function and monotonic policy improvement. This mechanism eliminates reliance on unreliable uncertainty quantification—common in prior approaches—and mitigates redundant conservatism arising from neglecting discrepancies between the model and the offline dataset. Evaluated on the D4RL benchmark, the method consistently outperforms state-of-the-art offline RL algorithms, achieving substantial performance gains—particularly on high-difficulty tasks—while exhibiting improved training stability. Furthermore, it provides verifiable error bounds with rigorous theoretical analysis.
📝 Abstract
Model-based reinforcement learning (RL), which learns an environment model from the offline dataset and uses it to generate out-of-distribution data, has become an effective approach to the distribution-shift problem in offline RL. Because of the gap between the learned and actual environments, conservatism must be incorporated into the algorithm to balance accurate offline data against imprecise model data. The conservatism of current algorithms mostly relies on model uncertainty estimation. However, uncertainty estimation is unreliable and leads to poor performance in certain scenarios, and previous methods ignore differences among the model data, which introduces excessive conservatism. Therefore, this paper proposes a milDly cOnservative Model-bAsed offlINe RL algorithm (DOMAIN) that addresses these issues without estimating model uncertainty. DOMAIN introduces an adaptive sampling distribution over model samples, which adaptively adjusts the penalty on model data. We theoretically demonstrate that the Q value learned by DOMAIN outside the dataset region is a lower bound of the true Q value, that DOMAIN is less conservative than previous model-based offline RL algorithms, and that it guarantees safe policy improvement. Extensive experiments show that DOMAIN outperforms prior RL algorithms on the D4RL benchmark.
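As a hedged illustration of the adaptive-penalty idea described in the abstract (this is not the authors' implementation; the per-sample reliability scores and the penalty scaling below are hypothetical stand-ins for DOMAIN's derived weights):

```python
import numpy as np

def adaptive_weights(scores, temperature=1.0):
    """Softmax over per-sample scores -> a sampling distribution for model data.

    `scores` is a hypothetical reliability signal; DOMAIN obtains its
    distribution without explicit model uncertainty estimation."""
    z = np.asarray(scores, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def penalized_targets(rewards, next_q, weights, gamma=0.99, beta=1.0):
    """Standard Bellman targets minus an adaptive penalty on model samples.

    Samples weighted more heavily receive a proportionally larger penalty,
    which pushes the learned Q toward a lower bound on model-generated data."""
    n = len(rewards)
    penalty = beta * n * np.asarray(weights)   # uniform weights give beta each
    return np.asarray(rewards) + gamma * np.asarray(next_q) - penalty
```

With `beta > 0` every model-generated target is pulled below the plain Bellman target, and the pull is largest where the adaptive distribution places the most mass; setting `beta = 0` recovers the unpenalized update.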
Problem

Research questions and friction points this paper is trying to address.

Addresses distribution shift in offline RL with a model-based approach
Reduces conservatism without relying on unreliable model uncertainty estimation
Improves performance and guarantees safe policy improvement in offline RL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses model-based RL without model uncertainty estimation
Adaptive sampling distribution that adjusts the model-data penalty
Provides a safe policy improvement guarantee
Xiao-Yin Liu
Institute of Automation, Chinese Academy of Sciences
Robotics · Human-robot interaction · Reinforcement learning · Preference learning
Xiao-Hu Zhou
Institute of Automation, Chinese Academy of Sciences
Medical robotics · Image analysis · Deep learning
Mei-Jiang Gui
Institute of Automation, Chinese Academy of Sciences
Surgical Robot · Tactile Perception
Xiaoliang Xie
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
Shiqi Liu
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
Hao Li
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
Tian-Yu Xiang
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
De-Xing Huang
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
Zeng-Guang Hou
Professor and Deputy Director, SKLMCCS, Institute of Automation, Chinese Academy of Sciences
Computational Intelligence · Robotics · Medical Robots · Intelligent Systems