ReFORM: Reflected Flows for On-support Offline RL via Noise Manipulation

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two challenges in offline reinforcement learning: policies are prone to out-of-distribution (OOD) errors when they deviate from the training data distribution, and they need enough expressive power to model multimodal action distributions. To tackle these issues, the paper proposes ReFORM, which trains a flow-based policy via behavioral cloning from a bounded source distribution so that the policy captures the support of the action distribution. ReFORM then introduces, for the first time, reflected flows that generate bounded noise for this policy: the resulting actions inherently respect the support constraint while the policy is optimized for performance, circumventing the limits that conventional statistical-distance penalties place on policy improvement. Evaluated on 40 tasks from the OGBench benchmark with a single set of hyperparameters, ReFORM achieves state-of-the-art performance across the board, significantly outperforming baselines that require manual tuning.

📝 Abstract
Offline reinforcement learning (RL) aims to learn the optimal policy from a fixed dataset generated by behavior policies without additional environment interactions. One common challenge that arises in this setting is the out-of-distribution (OOD) error, which occurs when the policy leaves the training distribution. Prior methods penalize a statistical distance term to keep the policy close to the behavior policy, but this constrains policy improvement and may not completely prevent OOD actions. Another challenge is that the optimal policy distribution can be multimodal and difficult to represent. Recent works apply diffusion or flow policies to address this problem, but it is unclear how to avoid OOD errors while retaining policy expressiveness. We propose ReFORM, an offline RL method based on flow policies that enforces the less restrictive support constraint by construction. ReFORM learns a behavior cloning (BC) flow policy with a bounded source distribution to capture the support of the action distribution, then optimizes a reflected flow that generates bounded noise for the BC flow, keeping the support while maximizing performance. Across 40 challenging tasks from the OGBench benchmark with datasets of varying quality, and using a constant set of hyperparameters for all tasks, ReFORM dominates all baselines with hand-tuned hyperparameters on the performance profile curves.
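The core primitive behind reflected flows is boundary reflection: any sample that would leave a bounded domain is folded back inside by mirroring at the boundary, so generated noise stays in the box by construction rather than by penalty. Below is a minimal, illustrative sketch of that folding map for an axis-aligned box; the function name, the `[-1, 1]` interval, and the vectorized form are my assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def reflect_into_box(x, lo=-1.0, hi=1.0):
    """Fold arbitrary real values into [lo, hi] by repeated boundary reflection.

    A point that overshoots a boundary by t ends up distance t inside it
    (e.g. with the default box, 1.5 -> 0.5 and -2.3 -> 0.3). This is the
    standard triangle-wave folding used to keep samples inside a bounded
    support, as reflected flow/diffusion models do.
    """
    width = hi - lo
    # Position within one full back-and-forth period of length 2 * width.
    u = np.mod(np.asarray(x, dtype=float) - lo, 2.0 * width)
    # First half of the period moves lo -> hi; second half mirrors back.
    return lo + np.where(u <= width, u, 2.0 * width - u)

# Unbounded Gaussian noise becomes bounded noise on [-1, 1]^d:
noise = reflect_into_box(np.random.randn(5, 3) * 3.0)
```

Because reflection is applied pointwise, the same map bounds every action dimension independently, which is why no distance penalty is needed to keep samples on the box.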
Problem

Research questions and friction points this paper is trying to address.

offline reinforcement learning
out-of-distribution error
policy expressiveness
multimodal policy distribution
support constraint
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reflected Flow
Support Constraint
Offline Reinforcement Learning
Flow Policy
Out-of-Distribution Error