DPR: Diffusion Preference-based Reward for Offline Reinforcement Learning

📅 2025-03-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In offline preference-based reinforcement learning (PbRL), existing MLP- and Transformer-based reward models exhibit limited capacity to accurately recover reward functions from sparse, pairwise preference signals. To address this, we propose Diffusion-based Preference Reward (DPR), the first method to introduce diffusion generative modeling into PbRL—directly learning the underlying preference distribution over state-action pairs without relying on the Bradley–Terry assumption. We further design a conditional diffusion framework, C-DPR, which leverages relative preference information from trajectory pairs to guide reward estimation. Evaluated on multiple offline RL benchmarks, DPR achieves substantial improvements over state-of-the-art baselines: it yields more robust reward fitting and boosts policy performance by 12–28%. These results demonstrate the effectiveness and superiority of diffusion models in capturing fine-grained, high-fidelity preference distributions for reward learning.

Technology Category

Application Category

📝 Abstract
Offline preference-based reinforcement learning (PbRL) mitigates the need for reward definition, aligning with human preferences via preference-driven reward feedback without interacting with the environment. However, the effectiveness of preference-driven reward functions depends on the modeling ability of the learning model, which current MLP-based and Transformer-based methods may fail to adequately provide. To alleviate the failure of the reward function caused by insufficient modeling, we propose a novel preference-based reward acquisition method: Diffusion Preference-based Reward (DPR). Unlike previous methods using Bradley-Terry models for trajectory preferences, we use diffusion models to directly model preference distributions for state-action pairs, allowing rewards to be discriminatively obtained from these distributions. In addition, considering the particularity of preference data that only know the internal relationships of paired trajectories, we further propose Conditional Diffusion Preference-based Reward (C-DPR), which leverages relative preference information to enhance the construction of the diffusion model. We apply the above methods to existing offline reinforcement learning algorithms and a series of experiment results demonstrate that the diffusion-based reward acquisition approach outperforms previous MLP-based and Transformer-based methods.
Problem

Research questions and friction points this paper is trying to address.

Improves reward modeling in offline reinforcement learning
Uses diffusion models for preference-based reward acquisition
Enhances reward accuracy with conditional diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion models for reward acquisition
Models preference distributions for state-action pairs
Enhances diffusion model with relative preference data
🔎 Similar Papers
No similar papers found.