Safe But Not Sorry: Reducing Over-Conservatism in Safety Critics via Uncertainty-Aware Modulation

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning agents face a fundamental trade-off between safe exploration and task performance in real-world deployment: stringent safety constraints degrade performance, while reward-driven policies risk frequent safety violations whose diffuse cost landscapes flatten gradients and stall policy improvement. To address this, we propose an uncertainty-aware Safety Critic mechanism that models state-action uncertainty to dynamically modulate conservatism. Additionally, we introduce a region-adaptive cost-gradient refinement strategy during critic training, enforcing conservative behavior near safety boundaries while enabling efficient optimization within safe regions. Experiments demonstrate a 40% reduction in safety violations, an 83% decrease in the error between predicted and true cost gradients, and maintained or improved task returns. Our approach relaxes the rigid safety-performance trade-off inherent in conventional methods, supporting more reliable RL deployment.
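The uncertainty-aware modulation described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the bootstrapped critic ensemble, the `beta` coefficient, and the use of ensemble disagreement as an uncertainty proxy are all assumptions.

```python
import numpy as np

def ensemble_cost(critics, batch):
    """Evaluate each cost critic in an ensemble on a batch of state-action inputs."""
    return np.stack([q(batch) for q in critics])  # shape: (n_critics, batch_size)

def modulated_target(critics, batch, beta=1.0):
    """Conservative cost target: ensemble mean plus an uncertainty-scaled margin.

    Disagreement (std) across the ensemble stands in for epistemic uncertainty,
    so the pessimism margin shrinks toward zero in well-explored safe regions
    and grows in uncertain, potentially costly ones -- concentrating
    conservatism only where it is needed."""
    preds = ensemble_cost(critics, batch)
    return preds.mean(axis=0) + beta * preds.std(axis=0)

# Hypothetical critics: they agree on "safe" inputs (< 0) and disagree on risky ones.
q1 = lambda x: np.where(x < 0, 0.1, 0.5)
q2 = lambda x: np.where(x < 0, 0.1, 0.9)

targets = modulated_target([q1, q2], np.array([-1.0, 1.0]), beta=1.0)
# Safe input: no disagreement, so the target stays at the mean (0.1).
# Risky input: mean 0.7 plus std 0.2 gives an inflated target of 0.9.
```

The key design choice is that conservatism is a per-point quantity driven by the data, rather than a single global pessimism constant applied everywhere.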

📝 Abstract
Ensuring the safe exploration of reinforcement learning (RL) agents is critical for deployment in real-world systems. Yet existing approaches struggle to strike the right balance: methods that tightly enforce safety often cripple task performance, while those that prioritize reward leave safety constraints frequently violated, producing diffuse cost landscapes that flatten gradients and stall policy improvement. We introduce the Uncertain Safety Critic (USC), a novel approach that integrates uncertainty-aware modulation and refinement into critic training. By concentrating conservatism in uncertain and costly regions while preserving sharp gradients in safe areas, USC enables policies to achieve effective reward-safety trade-offs. Extensive experiments show that USC reduces safety violations by approximately 40% while maintaining competitive or higher rewards, and reduces the error between predicted and true cost gradients by approximately 83%, breaking the prevailing trade-off between safety and performance and paving the way for scalable safe RL.
Problem

Research questions and friction points this paper is trying to address.

Balancing safety constraints with task performance in reinforcement learning
Reducing over-conservatism in safety critics through uncertainty-aware modulation
Preserving sharp cost gradients in safe regions while concentrating conservatism near safety boundaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates uncertainty-aware modulation into critic training
Concentrates conservatism in uncertain costly regions
Preserves sharp gradients in safe policy areas
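The region-adaptive idea above can be sketched as a smooth gate on the conservatism penalty. This is a hedged illustration only: the sigmoid gate, the `width` and sharpness parameters, and the additive loss form are assumptions standing in for the paper's refinement strategy.

```python
import numpy as np

def boundary_weight(cost_pred, cost_limit, width=0.2):
    """Gate the conservatism penalty by proximity to the safety boundary.

    Returns ~1 when the predicted cost approaches the limit (be conservative)
    and ~0 deep inside the safe region, so critic training there reduces to
    plain regression and its gradients stay sharp."""
    z = (np.asarray(cost_pred) - (cost_limit - width)) / width
    return 1.0 / (1.0 + np.exp(-8.0 * z))  # smooth sigmoid gate

def refined_critic_loss(pred, target, pessimism, cost_limit):
    """Regression loss plus a boundary-gated pessimism term (illustrative)."""
    w = boundary_weight(pred, cost_limit)
    return np.mean((pred - target) ** 2 + w * pessimism)
```

With a cost limit of 1.0, a prediction of 0.1 gets a near-zero weight while a prediction of 0.95 is weighted close to 1, so the penalty is active only near the boundary.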