Epigraph-Guided Flow Matching for Safe and Performant Offline Reinforcement Learning

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a central tension in offline reinforcement learning: ensuring safety while maintaining performance. Existing approaches often exacerbate this trade-off by relying on soft constraints, excessive conservatism, or decoupled optimization objectives, leading either to safety violations or to degraded returns. The authors formulate the problem as an optimal control task with state constraints and propose an epigraph-guided policy synthesis framework that jointly optimizes safety and reward through an epigraph reformulation, avoiding objective decoupling and post-hoc filtering. Their method introduces a feasibility-aware value function to reweight the behavior distribution and integrates flow matching to fit a generative policy, enabling intrinsic coordination between safety and performance. Evaluated on safety-critical benchmarks such as Safety-Gymnasium, the approach achieves near-zero safety violations while preserving competitive returns.
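The summary does not spell out the epigraph reformulation itself. As a rough sketch of the standard epigraph trick such methods build on (the symbols below are illustrative and are not taken from the paper): a constrained minimum is rewritten by lifting the objective into an auxiliary scalar $z$,

$$\min_x f(x) \quad\Longleftrightarrow\quad \min_{x,\,z}\; z \quad \text{s.t.} \quad f(x) \le z .$$

Applied to state-constrained control with per-state constraint $h(s_t) \le 0$ (safe iff non-positive) and cost-to-go $J^\pi(s)$, one common form folds reward and safety into a single value over the augmented state $(s, z)$:

$$V(s, z) \;=\; \min_{\pi}\; \max\!\Big( J^{\pi}(s) - z,\;\; \max_{t}\, h(s_t) \Big), \qquad V^\star(s) \;=\; \min_{z}\; z \quad \text{s.t.} \quad V(s, z) \le 0 ,$$

so that optimizing return and satisfying the state constraint become one coupled objective rather than two decoupled ones. This is a generic sketch of the technique, not necessarily the paper's exact formulation.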

📝 Abstract
Offline reinforcement learning (RL) provides a compelling paradigm for training autonomous systems without the risks of online exploration, particularly in safety-critical domains. However, jointly achieving strong safety and performance from fixed datasets remains challenging. Existing safe offline RL methods often rely on soft constraints that allow violations, introduce excessive conservatism, or struggle to balance safety, reward optimization, and adherence to the data distribution. To address this, we propose Epigraph-Guided Flow Matching (EpiFlow), a framework that formulates safe offline RL as a state-constrained optimal control problem to co-optimize safety and performance. We learn a feasibility value function derived from an epigraph reformulation of the optimal control problem, thereby avoiding the decoupled objectives or post-hoc filtering common in prior work. Policies are synthesized by reweighting the behavior distribution based on this epigraph value function and fitting a generative policy via flow matching, enabling efficient, distribution-consistent sampling. Across various safety-critical tasks, including Safety-Gymnasium benchmarks, EpiFlow achieves competitive returns with near-zero empirical safety violations, demonstrating the effectiveness of epigraph-guided policy synthesis.
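The pipeline described in the abstract (reweight behavior actions by a feasibility/epigraph value, then fit a generative policy via flow matching) can be sketched in a few lines. The toy numpy illustration below is an assumption-laden sketch, not the paper's implementation: the linear "velocity network", the exponential weighting scheme, and the stand-in value function are all placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def feasibility_weights(values, temperature=1.0):
    """Exponential reweighting of behavior actions by a (hypothetical)
    feasibility value: low-value (unsafe) actions get weight near zero."""
    w = np.exp(values / temperature)
    return w / w.sum()

def flow_matching_loss(theta, states, actions, weights, t, noise):
    """Weighted conditional flow-matching regression.

    Linear interpolation path x_t = (1 - t) * noise + t * action with
    target velocity (action - noise); the "velocity network" here is a
    toy linear map theta purely for illustration.
    """
    x_t = (1.0 - t)[:, None] * noise + t[:, None] * actions
    target = actions - noise
    feats = np.concatenate([x_t, states, t[:, None]], axis=1)
    pred = feats @ theta                      # toy linear velocity model
    per_sample = ((pred - target) ** 2).sum(axis=1)
    return (weights * per_sample).sum()       # feasibility-weighted loss

# Toy batch: 1-D states, 2-D actions drawn as a stand-in behavior dataset.
n, s_dim, a_dim = 64, 1, 2
states = rng.normal(size=(n, s_dim))
actions = rng.normal(size=(n, a_dim))
values = -np.abs(actions).sum(axis=1)         # stand-in for the learned value
weights = feasibility_weights(values)
t = rng.uniform(size=n)
noise = rng.normal(size=(n, a_dim))
theta = np.zeros((a_dim + s_dim + 1, a_dim))  # untrained parameters
loss = flow_matching_loss(theta, states, actions, weights, t, noise)
```

In a real training loop `theta` would be a neural network trained by gradient descent on this loss, and sampling an action would integrate the learned velocity field from noise at `t = 0` to an action at `t = 1`; the reweighting is what biases the fitted policy toward the feasible, high-value part of the behavior distribution.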
Problem

Research questions and friction points this paper is trying to address.

offline reinforcement learning
safety
performance
state-constrained control
data distribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Epigraph reformulation
Flow matching
Safe offline reinforcement learning
State-constrained optimal control
Feasibility value function