Video Generation Models Are Good Latent Reward Models

πŸ“… 2025-11-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing video reward feedback learning (ReFL) operates in pixel space, resulting in high memory consumption, slow training, a lack of early supervision, and poor alignment with human preferences. This work proposes latent-space ReFL, the first method to directly employ a pre-trained video diffusion model as a reward model in its noisy latent space, enabling end-to-end, VAE-free gradient optimization over the full denoising trajectory. By backpropagating reward signals through latent variables, the approach supports early supervision at arbitrary timesteps and jointly optimizes motion dynamics and structural coherence. It reduces GPU memory usage by 42% and accelerates training by 3.1× compared to RGB-space ReFL. On multiple video generation benchmarks, latent-space ReFL consistently outperforms its pixel-space counterpart, achieving an 18.7% higher human preference win rate. These results establish latent-space reward modeling as a more efficient and human-aligned paradigm for video generation.
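
The efficiency gap comes down to where the reward is computed. A minimal sketch of the two paths is below; `vae`, `pixel_reward_model`, and `latent_reward_model` are hypothetical stand-ins, not modules from the paper.

```python
def pixel_space_reward(latents, vae, pixel_reward_model):
    """RGB-space ReFL path: latents must be decoded before scoring,
    so the VAE decoder and full-resolution pixel tensors sit on the
    backpropagation path (the memory/time overhead described above)."""
    frames = vae.decode(latents)          # (B, T, 3, H, W) pixel frames
    return pixel_reward_model(frames)     # scalar preference score

def latent_space_reward(noisy_latents, t, latent_reward_model):
    """Latent-space path: a pre-trained video diffusion backbone scores
    the noisy latents directly at timestep t, with no VAE decode, so a
    supervision signal exists even at early, heavily noised steps."""
    return latent_reward_model(noisy_latents, t)  # scalar preference score
```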

πŸ“ Abstract
Reward feedback learning (ReFL) has proven effective for aligning image generation with human preferences. However, its extension to video generation faces significant challenges. Existing video reward models rely on vision-language models designed for pixel-space inputs, confining ReFL optimization to near-complete denoising steps after computationally expensive VAE decoding. This pixel-space approach incurs substantial memory overhead and increased training time, and its late-stage optimization lacks early-stage supervision, refining only visual quality rather than fundamental motion dynamics and structural coherence. In this work, we show that pre-trained video generation models are naturally suited for reward modeling in the noisy latent space, as they are explicitly designed to process noisy latent representations at arbitrary timesteps and inherently preserve temporal information through their sequential modeling capabilities. Accordingly, we propose Process Reward Feedback Learning (PRFL), a framework that conducts preference optimization entirely in latent space, enabling efficient gradient backpropagation throughout the full denoising chain without VAE decoding. Extensive experiments demonstrate that PRFL significantly improves alignment with human preferences, while achieving substantial reductions in memory consumption and training time compared to RGB ReFL.
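
As a concrete illustration of optimization over the full denoising chain, here is a hedged sketch of one PRFL-style update, assuming a diffusers-like scheduler API; `generator`, `reward_model`, the latent shape, and the `supervise_at` indices are illustrative assumptions rather than the authors' implementation.

```python
import torch

def prfl_step(generator, reward_model, scheduler, optimizer, prompt_emb,
              supervise_at=(10, 25, 40)):
    """One hypothetical PRFL update: denoise in latent space and
    accumulate rewards at intermediate timesteps, never decoding to RGB."""
    # Pure-noise video latents: (batch, frames, channels, height, width).
    latents = torch.randn(1, 16, 4, 32, 32, device=prompt_emb.device)
    reward = latents.new_zeros(())

    for i, t in enumerate(scheduler.timesteps):
        noise_pred = generator(latents, t, prompt_emb)
        latents = scheduler.step(noise_pred, t, latents).prev_sample
        if i in supervise_at:
            # Score the *noisy* intermediate latent directly; gradients flow
            # back through every denoising step taken so far. reward_model is
            # a frozen pre-trained video diffusion model (requires_grad off).
            reward = reward + reward_model(latents, t, prompt_emb)

    (-reward).backward()   # ascend the accumulated reward
    optimizer.step()
    optimizer.zero_grad()
```

Scoring at early indices such as step 10 of the schedule is what supplies the early-stage supervision on motion dynamics and structural coherence that late-stage, pixel-space scoring cannot provide.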
Problem

Research questions and friction points this paper is trying to address.

Extending reward feedback learning from image to video generation faces efficiency challenges
Existing video reward models require costly pixel-space optimization after VAE decoding
Late-stage optimization lacks early supervision for motion dynamics and structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent space reward modeling with video generators
Full denoising chain optimization without VAE decoding
Process Reward Feedback Learning for preference alignment
πŸ”Ž Similar Papers
No similar papers found.
Xiaoyue Mi
ICT
Wenqing Yu
Tencent Hunyuan
Jiesong Lian
Huazhong University of Science and Technology
Shibo Jie
Peking University
Computer Vision · Natural Language Processing · Multimodal Learning
Ruizhe Zhong
Ph.D. Candidate in Artificial Intelligence, Shanghai Jiao Tong University
AI4EDA
Zijun Liu
Tsinghua University
LLM · Agent · Machine Translation · AIGC
Guozhen Zhang
Nanjing University
Video Frame Interpolation
Zixiang Zhou
Tencent Hunyuan
Zhiyong Xu
Tencent Hunyuan
Yuan Zhou
Tencent Hunyuan
Qinglin Lu
Tencent Hunyuan
Fan Tang
University of Chinese Academy of Sciences