🤖 AI Summary
In safe reinforcement learning, jointly optimizing reward and safety objectives often leads to conflicts, while external safety filters rely on prior knowledge and hinder exploration. This paper proposes the modular Cost-Aware Regulator (CAR), which decouples performance and safety optimization: it applies a trainable, adaptive action scaling mechanism *post-policy* to smoothly modulate actions without overriding the original policy, thereby preserving exploration while ensuring constraint satisfaction. CAR dynamically adjusts its scaling coefficient based on predicted constraint-violation severity and is compatible with off-policy algorithms such as SAC and TD3. Evaluated on sparse-cost tasks in Safety Gym, CAR achieves the best return–cost trade-off among compared methods: constraint violations are reduced by up to 126×, task return improves by over an order of magnitude relative to prior approaches, and training stability and safety are significantly enhanced.
📄 Abstract
Safe reinforcement learning (RL) seeks to mitigate unsafe behaviors that arise from exploration during training by reducing constraint violations while maintaining task performance. Existing approaches typically rely on a single policy to jointly optimize reward and safety, which can cause instability due to conflicting objectives, or they use external safety filters that override actions and require prior system knowledge. In this paper, we propose a modular cost-aware regulator that scales the agent's actions based on predicted constraint violations, preserving exploration through smooth action modulation rather than overriding the policy. The regulator is trained to minimize constraint violations while avoiding degenerate suppression of actions. Our approach integrates seamlessly with off-policy RL methods such as SAC and TD3, and achieves state-of-the-art return-to-cost ratios on Safety Gym locomotion tasks with sparse costs, reducing constraint violations by up to 126 times while increasing returns by over an order of magnitude compared to prior methods.
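The paper's regulator is trainable, and its exact form is not given in this summary. As a minimal sketch of the post-policy scaling idea, the snippet below uses a fixed exponential gate standing in for the learned scaling coefficient; the names `car_gate` and `regulate`, the `threshold` and `sensitivity` parameters, and the gate shape are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def car_gate(predicted_cost, threshold=0.0, sensitivity=5.0):
    """Hypothetical stand-in for the learned scaling coefficient:
    maps predicted constraint-violation severity to a factor in (0, 1],
    equal to 1 at or below the threshold and decaying smoothly as
    severity grows."""
    severity = max(predicted_cost - threshold, 0.0)
    return float(np.exp(-sensitivity * severity))

def regulate(action, predicted_cost, threshold=0.0, sensitivity=5.0):
    """Post-policy modulation: the policy's action is scaled, never
    replaced, so the exploration direction is preserved while the
    magnitude shrinks when a violation is predicted."""
    return car_gate(predicted_cost, threshold, sensitivity) * np.asarray(action)

a = np.array([0.8, -0.3])
# Safe prediction: the action passes through unchanged (gate = 1.0).
print(regulate(a, predicted_cost=0.0))
# Predicted violation: same direction, sharply reduced magnitude.
print(regulate(a, predicted_cost=1.0))
```

The key design point this illustrates is smooth modulation rather than a hard override: unlike a safety filter that substitutes a different action, the scaled action stays colinear with the policy's output, so the policy still observes the consequences of its own choices during training.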