🤖 AI Summary
Projection-based safety filters in reinforcement learning cause action aliasing, where multiple unsafe actions map to the same safe action, distorting policy gradients. Method: We systematically compare safe policy RL (SP-RL), which embeds the safeguard in the policy via differentiable optimization layers, with safe environment RL (SE-RL), which treats it as part of the environment. A unified actor-critic formalization shows that action aliasing manifests in SP-RL as rank-deficient Jacobians during backpropagation through the safeguard, whereas in SE-RL its effect is implicitly approximated by the critic. Building on this analysis, we propose a penalty-based improvement to SP-RL's policy update, aligned with established SE-RL practices, that restores gradient information. Contribution/Results: Experiments confirm that action aliasing is more detrimental to unmodified SP-RL than to SE-RL; with the penalty, SP-RL matches or outperforms improved SE-RL across a range of continuous-control environments, demonstrating the viability of policy-layer safety enforcement.
📝 Abstract
Projection-based safety filters, which modify unsafe actions by mapping them to the closest safe alternative, are widely used to enforce safety constraints in reinforcement learning (RL). Two integration strategies are commonly considered: safe environment RL (SE-RL), where the safeguard is treated as part of the environment, and safe policy RL (SP-RL), where it is embedded within the policy through differentiable optimization layers. Despite their practical relevance in safety-critical settings, a formal understanding of their differences is lacking. In this work, we present a theoretical comparison of SE-RL and SP-RL. We identify a key distinction in how each approach is affected by action aliasing, a phenomenon in which multiple unsafe actions are projected to the same safe action, causing information loss in the policy gradients. In SE-RL, this effect is implicitly approximated by the critic, while in SP-RL, it manifests directly as rank-deficient Jacobians during backpropagation through the safeguard. Our contributions are threefold: (i) a unified formalization of SE-RL and SP-RL in the context of actor-critic algorithms, (ii) a theoretical analysis of their respective policy gradient estimates, highlighting the role of action aliasing, and (iii) a comparative study of mitigation strategies, including a novel penalty-based improvement for SP-RL that aligns with established SE-RL practices. Empirical results support our theoretical predictions, showing that action aliasing is more detrimental for SP-RL than for SE-RL. However, with appropriate improvement strategies, SP-RL can match or outperform improved SE-RL across a range of environments. These findings provide actionable insights for choosing and refining projection-based safe RL methods based on task characteristics.
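As a minimal illustration of action aliasing, consider a one-dimensional safeguard that projects proposed actions onto a safe interval (here, simple clipping; the interval bounds, function names, and penalty form below are our own stand-ins, not the paper's actual safeguard). Distinct unsafe actions collapse to the same boundary action, and the projection's derivative is zero exactly where clipping occurs, so backpropagating through the safeguard, as SP-RL does, yields no gradient signal for unsafe proposals:

```python
def project(a, lo=-1.0, hi=1.0):
    """Euclidean projection onto the safe interval [lo, hi] (clipping)."""
    return min(max(a, lo), hi)

def project_grad(a, lo=-1.0, hi=1.0):
    """Derivative of the projection w.r.t. the proposed action a."""
    return 1.0 if lo < a < hi else 0.0

# Action aliasing: distinct unsafe actions map to the same safe action.
print({project(a) for a in (1.5, 2.0, 7.3)})  # {1.0}

# The gradient through the safeguard vanishes precisely for unsafe
# proposals, the 1-D analogue of a rank-deficient projection Jacobian.
print([project_grad(a) for a in (0.2, 1.5, -3.0)])  # [1.0, 0.0, 0.0]

def penalty(a):
    """Squared projection displacement; its gradient (2 * d) is nonzero
    whenever a is unsafe, restoring a signal that pushes the policy
    back toward the safe set."""
    d = a - project(a)
    return d * d
```

Adding such a displacement penalty to the policy objective is the spirit of the penalty-based improvement studied in the paper: it supplies the directional information that the zero Jacobian of the projection otherwise discards.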