🤖 AI Summary
In offline reinforcement learning, distributional shift in static datasets severely limits policy generalization. Existing data augmentation (DA) methods model only unidirectional (forward) trajectory generation, failing to capture the historical paths that lead to high-return states and thus constraining behavioral diversity. To address this, we propose the first diffusion-based bidirectional trajectory generation framework, which anchors on critical states to jointly model forward (future) and backward (historical) trajectories, enabling coordinated expansion of the state space. Our approach employs a dual-branch conditional diffusion architecture and integrates end-to-end with mainstream offline RL algorithms, including CQL and IQL. Evaluated on the D4RL benchmark, it significantly outperforms existing DA methods, achieving an average performance gain of 12.7% on sparse-reward tasks (e.g., AntMaze, Kitchen) and improving trajectory diversity by 3.2×.
📝 Abstract
Recent advances in offline Reinforcement Learning (RL) have shown that effective policy learning can benefit from imposing conservative constraints on pre-collected datasets. However, such static datasets often exhibit distribution bias, resulting in limited generalizability. To address this limitation, a straightforward solution is data augmentation (DA), which leverages generative models to enrich the data distribution. Despite promising results, current DA techniques focus solely on reconstructing future trajectories from given states, while ignoring the history transitions that lead to them. This single-direction paradigm inevitably hinders the discovery of diverse behavior patterns, especially those leading to critical states that may have yielded high-reward outcomes. In this work, we introduce Bidirectional Trajectory Diffusion (BiTrajDiff), a novel DA framework for offline RL that models both future and history trajectories from any intermediate state. Specifically, we decompose the trajectory generation task into two independent yet complementary diffusion processes: one generating forward trajectories to predict future dynamics, and the other generating backward trajectories to trace essential history transitions. BiTrajDiff can efficiently leverage critical states as anchors to expand into potentially valuable yet underexplored regions of the state space, thereby facilitating dataset diversity. Extensive experiments on the D4RL benchmark suite demonstrate that BiTrajDiff achieves superior performance compared to other advanced DA methods across various offline RL backbones.
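The abstract's core mechanism, anchoring a diffusion sampler at a critical state and denoising one segment forward in time and one backward, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the DDPM-style schedule is standard, but `denoise_fwd`/`denoise_bwd` are placeholder functions standing in for the two trained branch networks, and all names and dimensions are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T_DIFF = 50      # diffusion steps
H = 8            # states per generated segment
STATE_DIM = 4

# Standard DDPM bookkeeping: linear beta schedule and cumulative alphas.
betas = np.linspace(1e-4, 2e-2, T_DIFF)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoise_fwd(x_t, t, anchor):
    """Placeholder forward-branch noise predictor (a trained net in BiTrajDiff)."""
    return 0.1 * (x_t - anchor)  # toy epsilon estimate, for shape/flow only

def denoise_bwd(x_t, t, anchor):
    """Placeholder backward-branch noise predictor (a trained net in BiTrajDiff)."""
    return 0.1 * (x_t - anchor)

def ddpm_sample(denoise_fn, anchor):
    """Reverse diffusion from pure noise to an H-step trajectory segment,
    conditioned on the anchor state."""
    x = rng.standard_normal((H, STATE_DIM))
    for t in reversed(range(T_DIFF)):
        eps = denoise_fn(x, t, anchor)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

# Anchor on a "critical" state, then expand in both temporal directions.
anchor_state = rng.standard_normal(STATE_DIM)
future_seg = ddpm_sample(denoise_fwd, anchor_state)   # forward branch
history_seg = ddpm_sample(denoise_bwd, anchor_state)  # backward branch

# Stitching history -> anchor -> future yields one augmented trajectory.
trajectory = np.concatenate([history_seg, anchor_state[None], future_seg])
print(trajectory.shape)  # (2*H + 1, STATE_DIM)
```

The key point the sketch captures is that both branches share the same anchor conditioning but are otherwise independent samplers, so the dataset can be grown around any selected state rather than only rolled forward from it.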