AI Summary
This work addresses a key challenge in constrained reinforcement learning (CRL): existing algorithms often severely violate safety constraints during training because they underestimate cost functions, limiting their applicability in safety-critical scenarios. To mitigate this, we propose Memory-driven Intrinsic Cost Estimation (MICE), a novel approach that, for the first time, introduces a memory mechanism inspired by "flashbulb memory" into CRL. MICE stores unsafe states to identify high-risk regions and constructs an enhanced cost function by integrating intrinsic and extrinsic costs. By combining pseudo-count-based risk measurement, bias-corrected cost value estimation, and trust region policy optimization, MICE guarantees theoretical convergence while providing a worst-case bound on constraint violations. Empirical results demonstrate that MICE significantly reduces constraint violations during training without compromising policy performance relative to baseline methods.
Abstract
Constrained Reinforcement Learning (CRL) aims to maximize cumulative rewards while satisfying constraints. However, existing CRL algorithms often encounter significant constraint violations during training, limiting their applicability in safety-critical scenarios. In this paper, we identify the underestimation of the cost value function as a key factor contributing to these violations. To address this issue, we propose the Memory-driven Intrinsic Cost Estimation (MICE) method, which introduces intrinsic costs to mitigate underestimation and control bias to promote safer exploration. Inspired by flashbulb memory, where humans vividly recall dangerous experiences to avoid risks, MICE constructs a memory module that stores previously explored unsafe states to identify high-cost regions. The intrinsic cost is formulated as the pseudo-count of the current state visiting these risk regions. Furthermore, we propose an extrinsic-intrinsic cost value function that incorporates intrinsic costs and adopts a bias correction strategy. Using this function, we formulate an optimization objective within the trust region, along with corresponding optimization methods. Theoretically, we provide convergence guarantees for the proposed cost value function and establish the worst-case constraint violation for the MICE update. Extensive experiments demonstrate that MICE significantly reduces constraint violations while preserving policy performance comparable to baselines.
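The memory-driven intrinsic cost described above can be illustrated with a minimal sketch: a buffer stores previously encountered unsafe states, and the intrinsic cost of a query state is a kernel-based pseudo-count of how often it falls in those risk regions, which is then added to the extrinsic cost. The class and function names, the Gaussian kernel, and the weighting coefficient `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class FlashbulbMemory:
    """Illustrative memory module storing unsafe states (a sketch,
    not the paper's implementation)."""

    def __init__(self, capacity=1000, bandwidth=0.5):
        self.capacity = capacity    # max number of stored unsafe states
        self.bandwidth = bandwidth  # kernel width for the pseudo-count
        self.states = []

    def add_unsafe(self, state):
        # Store a state that incurred a safety violation.
        if len(self.states) >= self.capacity:
            self.states.pop(0)  # drop the oldest entry when full
        self.states.append(np.asarray(state, dtype=float))

    def intrinsic_cost(self, state):
        # Pseudo-count of the query state's visits to risk regions:
        # sum of Gaussian kernel similarities to stored unsafe states.
        if not self.states:
            return 0.0
        diffs = np.stack(self.states) - np.asarray(state, dtype=float)
        sq_dists = np.sum(diffs ** 2, axis=1)
        return float(np.sum(np.exp(-sq_dists / (2 * self.bandwidth ** 2))))

def combined_cost(extrinsic_cost, state, memory, beta=0.1):
    # Extrinsic-intrinsic cost: the extrinsic cost plus a weighted
    # intrinsic term, pushing the cost estimate upward near risk regions
    # to counteract underestimation.
    return extrinsic_cost + beta * memory.intrinsic_cost(state)
```

A state close to previously stored unsafe states receives a large intrinsic cost, while states far from all stored unsafe states receive almost none, so exploration is discouraged precisely in the identified high-cost regions.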