Controlling Underestimation Bias in Constrained Reinforcement Learning for Safe Exploration

📅 2026-01-17
🏛️ International Conference on Machine Learning
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge in constrained reinforcement learning (CRL) where existing algorithms often severely violate safety constraints during training due to underestimation of cost functions, limiting their applicability in safety-critical scenarios. To mitigate this, we propose Memory-driven Intrinsic Cost Estimation (MICE), a novel approach that introduces, for the first time, a memory mechanism inspired by "flashbulb memory" into CRL. MICE stores unsafe states to identify high-risk regions and constructs an enhanced cost function by integrating intrinsic and extrinsic costs. By combining pseudo-count-based risk measurement, bias-corrected cost value estimation, and trust region policy optimization, MICE guarantees theoretical convergence while providing a worst-case bound on constraint violations. Empirical results demonstrate that MICE significantly reduces constraint violations during training without compromising policy performance relative to baseline methods.

๐Ÿ“ Abstract
Constrained Reinforcement Learning (CRL) aims to maximize cumulative rewards while satisfying constraints. However, existing CRL algorithms often encounter significant constraint violations during training, limiting their applicability in safety-critical scenarios. In this paper, we identify the underestimation of the cost value function as a key factor contributing to these violations. To address this issue, we propose the Memory-driven Intrinsic Cost Estimation (MICE) method, which introduces intrinsic costs to mitigate underestimation and control bias to promote safer exploration. Inspired by flashbulb memory, where humans vividly recall dangerous experiences to avoid risks, MICE constructs a memory module that stores previously explored unsafe states to identify high-cost regions. The intrinsic cost is formulated as the pseudo-count of the current state visiting these risk regions. Furthermore, we propose an extrinsic-intrinsic cost value function that incorporates intrinsic costs and adopts a bias correction strategy. Using this function, we formulate an optimization objective within the trust region, along with corresponding optimization methods. Theoretically, we provide convergence guarantees for the proposed cost value function and establish the worst-case constraint violation for the MICE update. Extensive experiments demonstrate that MICE significantly reduces constraint violations while preserving policy performance comparable to baselines.
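The core mechanism the abstract describes, a memory of unsafe states whose pseudo-count yields an intrinsic cost that is added to the extrinsic cost, can be sketched as follows. This is a minimal illustration only: the Gaussian-kernel pseudo-count, the class and function names, and the bandwidth and blending weight `beta` are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

class UnsafeStateMemory:
    """Sketch of a memory module storing previously explored unsafe
    states (the capacity and kernel bandwidth are illustrative)."""

    def __init__(self, bandwidth=0.5, capacity=1000):
        self.states = []
        self.bandwidth = bandwidth
        self.capacity = capacity

    def add(self, state):
        # Evict the oldest unsafe state once capacity is reached.
        if len(self.states) >= self.capacity:
            self.states.pop(0)
        self.states.append(np.asarray(state, dtype=float))

    def pseudo_count(self, state):
        """Kernel-based pseudo-count of how strongly `state` falls
        within the stored high-risk regions (higher means closer
        to previously seen unsafe states)."""
        if not self.states:
            return 0.0
        s = np.asarray(state, dtype=float)
        sq_dists = np.sum((np.stack(self.states) - s) ** 2, axis=1)
        return float(np.sum(np.exp(-sq_dists / (2 * self.bandwidth ** 2))))

def augmented_cost(extrinsic_cost, state, memory, beta=0.1):
    """Extrinsic cost plus a pseudo-count-based intrinsic cost,
    counteracting underestimation near known unsafe regions."""
    return extrinsic_cost + beta * memory.pseudo_count(state)
```

In use, states near stored unsafe states receive a larger intrinsic penalty than distant ones, which pushes the cost estimate upward in high-risk regions and discourages the policy from re-entering them:

```python
memory = UnsafeStateMemory()
memory.add([0.0, 0.0])                       # record an unsafe state
near = augmented_cost(1.0, [0.1, 0.0], memory)
far = augmented_cost(1.0, [5.0, 5.0], memory)
# near > far: cost is inflated close to the remembered unsafe state
```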
Problem

Research questions and friction points this paper is trying to address.

Constrained Reinforcement Learning
Constraint Violation
Safe Exploration
Cost Underestimation
Safety-Critical Scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constrained Reinforcement Learning
Underestimation Bias
Intrinsic Cost
Memory Module
Safe Exploration