Manifold-Aware Exploration for Reinforcement Learning in Video Generation

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work tackles a core failure mode of reinforcement learning for video generation: exploration drifts off the data manifold, degrading generation quality and biasing reward estimation. The authors propose SAGE-GRPO, which treats the implicit data distribution defined by a pretrained video model as the underlying manifold. By combining a manifold-aware stochastic differential equation with a logarithmic curvature correction, a gradient norm equalizer, and a dual trust-region mechanism built on a periodic moving anchor, SAGE-GRPO keeps exploration stable within a neighborhood of the manifold. This suppresses noise-induced perturbations and long-term drift, yielding consistent improvements over existing methods on HunyuanVideo1.5 across multiple visual and alignment metrics, including VQ, MQ, TA, CLIPScore, and PickScore.
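To make the micro-level idea concrete, here is a purely illustrative Euler–Maruyama sampling step in the spirit the summary describes: an SDE rollout whose drift comes from a pretrained velocity field plus a correction term that damps steps in high-curvature regions. The paper's actual derivation is not reproduced here; the `curvature_scale` parameter, the `log1p`-based correction, and the function names are all hypothetical placeholders for the sketch.

```python
import numpy as np

def manifold_aware_sde_step(x, velocity, dt, sigma, curvature_scale, rng):
    """One illustrative SDE step: drift from a pretrained velocity field
    plus a hypothetical logarithmic curvature-style damping term.

    x              : current latent state (ndarray)
    velocity       : pretrained model's velocity prediction at (x, t)
    dt             : step size
    sigma          : exploration noise scale
    curvature_scale: strength of the (placeholder) curvature correction
    rng            : numpy Generator, so rollouts are reproducible
    """
    v_norm = np.linalg.norm(velocity) + 1e-8
    # Placeholder log-curvature correction: shrink the drift when the
    # velocity field is large (a crude proxy for leaving the manifold).
    damping = 1.0 / (1.0 + curvature_scale * np.log1p(v_norm))
    drift = damping * velocity
    # Stochastic exploration term of the ODE-to-SDE conversion.
    noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x + drift * dt + noise

# Minimal usage example with a seeded generator.
rng = np.random.default_rng(0)
x0 = np.zeros(4)
x1 = manifold_aware_sde_step(x0, np.ones(4), dt=0.01, sigma=0.1,
                             curvature_scale=0.05, rng=rng)
```

The design point being illustrated is only that the noise injected for exploration is modulated rather than applied uniformly, so rollouts stay near the region where the pretrained model is reliable.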

📝 Abstract
Group Relative Policy Optimization (GRPO) methods for video generation, such as FlowGRPO, remain far less reliable than their counterparts for language models and images. This gap arises because video generation has a complex solution space, and the ODE-to-SDE conversion used for exploration can inject excess noise, lowering rollout quality and making reward estimates less reliable, which destabilizes post-training alignment. To address this problem, we view the pre-trained model as defining a valid video data manifold and formulate the core problem as constraining exploration to the vicinity of this manifold, so that rollout quality is preserved and reward estimates remain reliable. We propose SAGE-GRPO (Stable Alignment via Exploration), which applies constraints at both micro and macro levels. At the micro level, we derive a precise manifold-aware SDE with a logarithmic curvature correction and introduce a gradient norm equalizer to stabilize sampling and updates across timesteps. At the macro level, we use a dual trust region with a periodic moving anchor and stepwise constraints, so that the trust region tracks checkpoints that are closer to the manifold and limits long-horizon drift. We evaluate SAGE-GRPO on HunyuanVideo1.5 using the original VideoAlign as the reward model and observe consistent gains over previous methods in VQ, MQ, TA, and visual metrics (CLIPScore, PickScore), demonstrating superior performance in both reward maximization and overall video quality. The code and visual gallery are available at https://dungeonmassster.github.io/SAGE-GRPO-Page/.
Problem

Research questions and friction points this paper is trying to address.

manifold-aware exploration
video generation
reinforcement learning
SDE noise
post-training alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

manifold-aware exploration
SAGE-GRPO
dual trust region
logarithmic curvature correction
gradient norm equalizer