🤖 AI Summary
This work addresses the policy-performance degradation that hard constraints induce in reinforcement learning, focusing on two novel bi-objective decision-making tasks: Reach-Always-Avoid (reaching a goal while permanently avoiding unsafe states) and Reach-Reach (achieving two distinct positive objectives). Building on recent connections between Hamilton–Jacobi (HJ) equations and RL, the authors propose two value functions for dual-objective satisfaction and derive explicit, tractable Bellman-style update rules by decomposing each problem into reach, avoid, and reach-avoid subproblems, an approach mathematically distinct from temporal-logic and Lagrangian formulations. Combining this analysis with Proximal Policy Optimization (PPO), they design DO-HJ-PPO, a principled algorithm for solving both problem classes. Across safe-arrival and multi-target tasks, DO-HJ-PPO produces qualitatively distinct behaviors from prior approaches and out-competes a number of baselines on various metrics.
📝 Abstract
Hard constraints in reinforcement learning (RL), whether imposed via the reward function or the model architecture, often degrade policy performance. Lagrangian methods offer a way to blend objectives with constraints, but often require intricate reward engineering and parameter tuning. In this work, we extend recent advances that connect Hamilton-Jacobi (HJ) equations with RL to propose two novel value functions for dual-objective satisfaction. Namely, we address: (1) the Reach-Always-Avoid problem, that of achieving distinct reward and penalty thresholds, and (2) the Reach-Reach problem, that of achieving thresholds of two distinct rewards. In contrast with temporal logic approaches, which typically involve representing an automaton, we derive explicit, tractable Bellman forms in this context by decomposing our problem into reach, avoid, and reach-avoid problems, so as to leverage these recent advances. From a mathematical perspective, the Reach-Always-Avoid and Reach-Reach problems are complementary and fundamentally different from standard sum-of-rewards problems and temporal logic problems, providing a new perspective on constrained decision-making. We leverage our analysis to propose a variation of Proximal Policy Optimization (DO-HJ-PPO) which solves these problems. Across a range of tasks for safe-arrival and multi-target achievement, we demonstrate that DO-HJ-PPO produces qualitatively distinct behaviors from previous approaches and out-competes a number of baselines on various metrics.
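To make the reach-avoid decomposition concrete, here is a minimal tabular sketch of the classic (undiscounted) reach-avoid Bellman backup from the HJ-reachability literature that the paper builds on, V(s) = min(c(s), max(r(s), max_a V(f(s, a)))), where r is a goal margin and c a safety margin (positive means safe). The 1-D chain environment, the specific margins, and the function names are illustrative assumptions, not the paper's setup or algorithm.

```python
# Toy tabular illustration of the reach-avoid Bellman backup
#     V(s) = min( c(s), max( r(s), max_a V(f(s, a)) ) )
# on a hypothetical 1-D chain with a goal state and one unsafe state.
# V(s) > 0 iff s can reach the goal without ever entering the unsafe state.

def reach_avoid_value(n, goal, obstacle, iters=100):
    r = [1.0 if s == goal else -1.0 for s in range(n)]      # goal margin
    c = [-1.0 if s == obstacle else 1.0 for s in range(n)]  # safety margin
    V = [min(r[s], c[s]) for s in range(n)]                 # initialization
    for _ in range(iters):
        nxt = []
        for s in range(n):
            # actions: step left, stay, step right (clipped to the chain)
            succ = [max(0, s - 1), s, min(n - 1, s + 1)]
            best = max(V[sp] for sp in succ)
            nxt.append(min(c[s], max(r[s], best)))
        if nxt == V:   # fixed point reached
            break
        V = nxt
    return V

V = reach_avoid_value(n=6, goal=5, obstacle=2)
# States 3-5 can reach the goal without crossing state 2 (V > 0);
# states 0-2 cannot (V < 0).
```

The key structural point, mirrored in the abstract, is that this is a min/max composition of margins rather than a discounted sum of rewards, which is why reach, avoid, and reach-avoid subproblems admit their own explicit Bellman forms.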