DenseGRPO: From Sparse to Dense Reward for Flow Matching Model Alignment

πŸ“… 2026-01-28
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ€– AI Summary
This work addresses the limitation of existing flow-matching-based GRPO methods in text-to-image generation, which rely on sparse rewards and consequently decouple intermediate denoising steps from global feedback. To overcome this, we propose DenseGRPO, a novel approach that introduces a step-level dense reward mechanism. By modeling rewards for intermediate clean images via ordinary differential equations (ODEs) and incorporating time-step-adaptive stochasticity into stochastic differential equation (SDE) sampling, DenseGRPO precisely quantifies each denoising step’s contribution to the final human preference. Furthermore, we design a reward-aware exploration space calibration strategy to reconcile the time-varying nature of noise intensity with exploration dynamics. Experimental results demonstrate that DenseGRPO significantly improves alignment with human preferences across multiple benchmarks.
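The core idea summarized above — scoring intermediate states instead of only the final image — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the rectified-flow interpolation x_t = (1 - t)Β·x0 + tΒ·noise (so the learned velocity is v = noise - x0 and a one-step ODE extrapolation gives x0 = x_t - tΒ·v), and `reward` stands in for a learned preference model.

```python
def predict_clean(x_t, t, v):
    # Hypothetical one-step ODE estimate of the clean sample, assuming the
    # rectified-flow parameterization x_t = (1 - t) * x0 + t * noise with
    # velocity v = noise - x0; the paper's exact convention may differ.
    return x_t - t * v

def dense_rewards(states, velocities, timesteps, reward):
    # Score the one-step clean estimate at every intermediate step, then
    # take consecutive differences as the per-step reward gain, i.e. each
    # denoising step's contribution to the final preference score.
    scores = [reward(predict_clean(x, t, v))
              for x, v, t in zip(states, velocities, timesteps)]
    return [scores[i + 1] - scores[i] for i in range(len(scores) - 1)]
```

With an exact velocity field, every intermediate estimate already recovers the true clean sample, so all step-wise gains are zero; a trained model's imperfect velocity is what produces informative non-zero gains.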

πŸ“ Abstract
Recent GRPO-based approaches built on flow matching models have shown remarkable improvements in human preference alignment for text-to-image generation. Nevertheless, they still suffer from the sparse reward problem: the terminal reward of the entire denoising trajectory is applied to all intermediate steps, producing a mismatch between the global feedback signal and the fine-grained contributions of individual denoising steps. To address this issue, we introduce DenseGRPO, a novel framework that aligns human preference with dense rewards, evaluating the fine-grained contribution of each denoising step. Specifically, our approach includes two key components: (1) we propose to predict the step-wise reward gain as the dense reward of each denoising step, applying a reward model to intermediate clean images estimated via an ODE-based approach. This aligns feedback signals with the contributions of individual steps and facilitates effective training; and (2) based on the estimated dense rewards, we reveal a mismatch in existing GRPO-based methods between the uniform exploration setting and the time-varying noise intensity, which leads to an inappropriate exploration space. We therefore propose a reward-aware scheme that calibrates the exploration space by adaptively adjusting timestep-specific stochasticity injection in the SDE sampler, ensuring a suitable exploration space at all timesteps. Extensive experiments on multiple standard benchmarks demonstrate the effectiveness of the proposed DenseGRPO and highlight the critical role of valid dense rewards in flow matching model alignment.
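The second component — timestep-specific stochasticity injection in the SDE sampler — can be illustrated with a single Euler–Maruyama step. This is a hedged sketch under generic assumptions, not the paper's calibration scheme: `noise_scale(t)` is a hypothetical per-timestep schedule standing in for the reward-aware calibration, and the drift is just the flow-matching velocity.

```python
import math
import random

def sde_step(x_t, v, t, dt, noise_scale):
    # One Euler-Maruyama step of an exploration SDE built on top of a
    # flow-matching velocity field: deterministic drift along v, plus
    # Gaussian noise whose magnitude is controlled by a timestep-specific
    # schedule noise_scale(t) (hypothetical stand-in for the paper's
    # reward-aware exploration-space calibration).
    drift = v * dt
    diffusion = noise_scale(t) * math.sqrt(abs(dt)) * random.gauss(0.0, 1.0)
    return x_t + drift + diffusion
```

Setting `noise_scale` to zero everywhere recovers the deterministic ODE sampler; a uniform constant reproduces the uniform exploration setting the abstract criticizes, and a time-varying schedule is where the calibration acts.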
Problem

Research questions and friction points this paper is trying to address.

sparse reward
flow matching
human preference alignment
denoising trajectory
reward mismatch
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dense Reward
Flow Matching
GRPO
Preference Alignment
Stochasticity Calibration
Haoyou Deng
National Key Laboratory of Multispectral Information Intelligent Processing Technology, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology; Tongyi Lab, Alibaba Group
Keyu Yan
Tongyi Lab, Alibaba Group
Chaojie Mao
Alibaba Group
Computer Vision
Xiang Wang
University of Science and Technology of China
Trustworthy AI, Graph Learning, Recommendation, Foundation Models, Multimodal Models
Yu Liu
Alibaba Group
self-supervised learning, generative modeling
Changxin Gao
National Key Laboratory of Multispectral Information Intelligent Processing Technology, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Nong Sang
Huazhong University of Science and Technology
Computer Vision and Pattern Recognition