Euphonium: Steering Video Flow Matching via Process Reward Gradient Guided Stochastic Dynamics

📅 2026-02-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the inefficiency of existing reinforcement learning-based video generation alignment methods, which suffer from sparse rewards and undirected exploration. To overcome this, the authors propose a stochastic dynamics framework guided by process reward gradients: the gradient of a process reward model is embedded into the drift term of the stochastic differential equation used in flow matching, providing dense, stepwise guidance throughout the generation process. They further design a distillation objective that internalizes this guidance signal into the flow network, eliminating the need for the reward model during inference. The approach unifies and generalizes existing sampling strategies by jointly leveraging latent process rewards and pixel-level outcome rewards, achieving efficient credit assignment and enhanced visual fidelity. Experiments on text-to-video generation demonstrate significantly improved alignment performance and 1.66× faster training convergence.

๐Ÿ“ Abstract
While online Reinforcement Learning has emerged as a crucial technique for aligning flow matching models with human preferences, current approaches are hindered by inefficient exploration during training rollouts. Relying on undirected stochasticity and sparse outcome rewards, these methods struggle to discover high-reward samples, resulting in data-inefficient and slow optimization. To address these limitations, we propose Euphonium, a novel framework that steers generation via process reward gradient guided dynamics. Our key insight is to formulate the sampling process as a theoretically principled Stochastic Differential Equation that explicitly incorporates the gradient of a Process Reward Model into the flow drift. This design enables dense, step-by-step steering toward high-reward regions, advancing beyond the unguided exploration in prior works, and theoretically encompasses existing sampling methods (e.g., Flow-GRPO, DanceGRPO) as special cases. We further derive a distillation objective that internalizes the guidance signal into the flow network, eliminating inference-time dependency on the reward model. We instantiate this framework with a Dual-Reward Group Relative Policy Optimization algorithm, combining latent process rewards for efficient credit assignment with pixel-level outcome rewards for final visual fidelity. Experiments on text-to-video generation show that Euphonium achieves better alignment than existing methods while accelerating training convergence by 1.66×. Our code is available at https://github.com/zerzerzerz/Euphonium
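The abstract's core mechanism, adding a reward-model gradient to the SDE drift during sampling, can be sketched as a single Euler–Maruyama step. This is a minimal illustrative sketch, not the paper's implementation: the function name `guided_sde_step`, the toy drift and reward functions, and the `guidance_scale`/`noise_scale` parameters are all assumptions for illustration.

```python
import numpy as np

def guided_sde_step(x, t, dt, flow_drift, reward_grad,
                    guidance_scale=1.0, noise_scale=0.1, rng=None):
    """One Euler-Maruyama step of a flow SDE whose drift is augmented
    with a process-reward gradient (hypothetical sketch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Base flow velocity plus the reward-gradient steering term.
    drift = flow_drift(x, t) + guidance_scale * reward_grad(x, t)
    # Diffusion term; setting noise_scale=0 recovers deterministic ODE flow.
    noise = noise_scale * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x + drift * dt + noise

# Toy illustration: the base flow pulls the sample toward the origin,
# while the "reward gradient" pulls it toward a high-reward target point.
target = np.array([1.0, 1.0])
x = np.array([0.0, 0.0])
for step in range(200):
    x = guided_sde_step(
        x, t=step / 200, dt=0.01,
        flow_drift=lambda x, t: -x,          # unguided flow velocity
        reward_grad=lambda x, t: target - x,  # gradient of a toy reward
        noise_scale=0.0)                      # deterministic for clarity
```

With these toy dynamics the combined drift is `target - 2x`, so the trajectory settles near `target / 2`, illustrating how the gradient term biases sampling toward high-reward regions without replacing the base flow.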
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Flow Matching
Video Generation
Reward Sparsity
Exploration Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Process Reward Gradient
Flow Matching
Stochastic Differential Equation
Reward-Guided Generation
Policy Distillation