ACL-QL: Adaptive Conservative Level in Q-Learning for Offline Reinforcement Learning

📅 2024-11-28
🏛️ IEEE Transactions on Neural Networks and Learning Systems
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In offline reinforcement learning, existing conservative methods (e.g., CQL) suffer from simultaneous over-conservatism and Q-value overestimation because they apply fixed-strength constraints to all samples. To address this, we propose adaptive conservative level in Q-learning (ACL-QL), a framework that introduces two learnable weight functions to modulate the conservative level per state-action pair, lifting Q-estimates more for high-quality transitions while suppressing overestimation on low-quality ones. We theoretically derive a sufficient condition under which the conservative level stays within a mild range, and design a joint optimization mechanism combining a monotonicity loss and surrogate losses. On the D4RL benchmark, ACL-QL achieves state-of-the-art performance, outperforming baselines such as BCQ, CQL, and IQL. Ablation studies confirm the efficacy of both the adaptive weighting scheme and the dual-loss design.

๐Ÿ“ Abstract
Offline reinforcement learning (RL), which operates solely on static datasets without further interactions with the environment, provides an appealing alternative for learning a safe and promising control policy. The prevailing methods typically learn a conservative policy to mitigate Q-value overestimation, but this is prone to being overdone, yielding an overly conservative policy. Moreover, they optimize all samples equally under fixed constraints, lacking the ability to control the conservative level in a fine-grained manner. Consequently, this limitation results in a performance decline. To address these two challenges in a unified way, we propose a framework, adaptive conservative level in Q-learning (ACL-QL), which limits the Q-values to a mild range and enables adaptive control of the conservative level over each state-action pair, i.e., lifting the Q-values more for good transitions and less for bad transitions. We theoretically analyze the conditions under which the conservative level of the learned Q-function can be limited to a mild range and how each transition can be optimized adaptively. Motivated by this analysis, we propose a novel algorithm, ACL-QL, which uses two learnable adaptive weight functions to control the conservative level over each transition. We then design a monotonicity loss and surrogate losses to train the adaptive weight functions, Q-function, and policy network alternately. We evaluate ACL-QL on the commonly used datasets for deep data-driven reinforcement learning (D4RL) benchmark and conduct extensive ablation studies to illustrate its effectiveness and state-of-the-art performance compared with existing offline DRL baselines.
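The adaptive-weighting idea above can be illustrated with a minimal sketch. The function and weight names below (`w_up`, `w_down`) are hypothetical; the paper's actual parameterization uses learned weight functions over state-action pairs, whereas this toy version takes scalar weights directly. It only shows how per-transition weights replace CQL's single fixed coefficient:

```python
def adaptive_conservative_penalty(q_ood, q_data, w_down, w_up):
    """Per-transition conservative penalty (illustrative only).

    CQL-style methods use one fixed alpha: alpha * (q_ood - q_data).
    ACL-QL, as summarized above, instead weights the two terms
    separately per transition: a large w_up lifts the Q-value of a
    good in-dataset transition more, while a large w_down pushes
    down out-of-distribution (OOD) Q-estimates more.
    """
    return w_down * q_ood - w_up * q_data

# Toy comparison on the same Q-values:
# a "good" transition gets a strongly negative penalty (Q lifted),
# a "bad" one a positive penalty (Q suppressed).
good = adaptive_conservative_penalty(q_ood=1.0, q_data=2.0, w_down=0.5, w_up=1.5)
bad = adaptive_conservative_penalty(q_ood=1.0, q_data=2.0, w_down=1.5, w_up=0.5)
print(good, bad)  # -2.5 0.5
```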
Problem

Research questions and friction points this paper is trying to address.

Mitigates Q-value overestimation in offline RL
Enables adaptive conservative level control
Improves performance by optimizing transitions adaptively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive control of conservative levels
Learnable adaptive weight functions
Monotonicity and surrogate losses training
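One plausible reading of the monotonicity loss mentioned above is a pairwise constraint: transitions judged higher-quality should receive weights at least as large. The hinge form and the `scores` input below are assumptions for illustration, not the paper's exact formulation:

```python
def monotonicity_loss(weights, scores):
    """Pairwise hinge penalty (illustrative sketch, not the paper's
    exact loss): whenever transition i has a higher quality score
    than transition j but a smaller weight, the violation is penalized.
    """
    loss, pairs = 0.0, 0
    for i in range(len(weights)):
        for j in range(len(weights)):
            if scores[i] > scores[j]:
                # Violation if the better-scored transition got less weight.
                loss += max(0.0, weights[j] - weights[i])
                pairs += 1
    return loss / max(pairs, 1)

# Weights ordered consistently with scores -> zero loss;
# inverted ordering -> positive loss.
print(monotonicity_loss([2.0, 1.0], [1.0, 0.0]))  # 0.0
print(monotonicity_loss([1.0, 2.0], [1.0, 0.0]))  # 1.0
```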
🔎 Similar Papers
2024-05-23 · Trans. Mach. Learn. Res. · Citations: 0