AEGPO: Adaptive Entropy-Guided Policy Optimization for Diffusion Models

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of existing reinforcement learning from human feedback (RLHF) methods for diffusion model optimization, such as GRPO: their reliance on static, uniform sampling that overlooks both the varying learning utility across prompts and denoising timesteps and the dynamic nature of critical exploration opportunities. To overcome this, the paper introduces attention entropy as a dual proxy signal. Globally, the relative change in entropy between the current and base policies (ΔEntropy) dynamically allocates rollout budgets across prompts; locally, peaks in the absolute entropy trace (Entropy(t)) identify high-uncertainty timesteps for targeted exploration. This two-level adaptive policy optimization mechanism significantly accelerates convergence in text-to-image generation and achieves superior alignment performance compared to standard GRPO variants.
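
To make the dual signal concrete, here is a minimal, hypothetical sketch in Python/PyTorch of how attention entropy could be measured and how a ΔEntropy-weighted rollout budget could be split across prompts. The function names (`attention_entropy`, `allocate_rollouts`) and the proportional allocation rule are illustrative assumptions, not the paper's implementation.

```python
import torch

def attention_entropy(attn: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of attention rows.

    attn: (num_queries, num_keys) attention probabilities, e.g. the
    softmax output of one cross-attention layer (each row sums to 1).
    Higher values mean more dispersed (uncertain) attention.
    """
    eps = 1e-12  # guard against log(0)
    row_entropy = -(attn * (attn + eps).log()).sum(dim=-1)
    return row_entropy.mean()

def allocate_rollouts(delta_entropy: torch.Tensor,
                      total_budget: int,
                      min_rollouts: int = 1) -> torch.Tensor:
    """Split a fixed rollout budget across prompts in proportion to
    |ΔEntropy| = |H_current - H_base|, so prompts where the policy has
    drifted most from the base policy receive more samples.
    """
    weights = delta_entropy.abs()
    weights = weights / weights.sum().clamp_min(1e-12)
    extra = total_budget - min_rollouts * delta_entropy.numel()
    # Rounding may leave a few rollouts unassigned; a full
    # implementation would redistribute the remainder.
    return min_rollouts + (weights * extra).round().long()
```

The exact normalization and allocation rule used in AEGPO may differ; the proportional split above only illustrates which direction the signal pushes the budget.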

📝 Abstract
Reinforcement learning from human feedback (RLHF) shows promise for aligning diffusion and flow models, yet policy optimization methods such as GRPO suffer from inefficient and static sampling strategies. These methods treat all prompts and denoising steps uniformly, ignoring substantial variations in sample learning value as well as the dynamic nature of critical exploration moments. To address this issue, we conduct a detailed analysis of the internal attention dynamics during GRPO training and uncover a key insight: attention entropy can serve as a powerful dual-signal proxy. First, across different samples, the relative change in attention entropy (ΔEntropy), which reflects the divergence between the current policy and the base policy, acts as a robust indicator of sample learning value. Second, during the denoising process, the peaks of absolute attention entropy (Entropy(t)), which quantify attention dispersion, effectively identify critical timesteps where high-value exploration occurs. Building on this observation, we propose Adaptive Entropy-Guided Policy Optimization (AEGPO), a novel dual-signal, dual-level adaptive optimization strategy. At the global level, AEGPO uses ΔEntropy to dynamically allocate rollout budgets, prioritizing prompts with higher learning value. At the local level, it exploits the peaks of Entropy(t) to guide exploration selectively at critical high-dispersion timesteps rather than uniformly across all denoising steps. By focusing computation on the most informative samples and the most critical moments, AEGPO enables more efficient and effective policy optimization. Experiments on text-to-image generation tasks demonstrate that AEGPO significantly accelerates convergence and achieves superior alignment performance compared to standard GRPO variants.
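
The local-level mechanism lends itself to a short sketch as well. The snippet below is a hedged illustration rather than the paper's code: it takes the absolute attention entropy recorded at each denoising step and selects interior local maxima of the Entropy(t) trace as the timesteps where exploration would be concentrated. The name `peak_timesteps` and the top-k truncation are assumptions.

```python
import torch

def peak_timesteps(entropy_trace: torch.Tensor, top_k: int = 5) -> torch.Tensor:
    """Pick high-dispersion denoising timesteps from an Entropy(t) trace.

    entropy_trace: (T,) absolute attention entropy recorded at each of
    the T denoising steps of one rollout. Returns indices of interior
    local maxima, keeping at most top_k, highest entropy first.
    """
    left = entropy_trace[1:-1] > entropy_trace[:-2]   # greater than left neighbor
    right = entropy_trace[1:-1] > entropy_trace[2:]   # greater than right neighbor
    peak_idx = torch.nonzero(left & right).squeeze(-1) + 1  # shift back to 0..T-1
    order = entropy_trace[peak_idx].argsort(descending=True)
    return peak_idx[order][:top_k]
```

Exploration, for example via extra noise injection or rollout branching (both assumptions here), would then be applied only at the returned timesteps instead of uniformly across all denoising steps.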
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning from Human Feedback
Policy Optimization
Diffusion Models
Sampling Strategy
Attention Entropy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention Entropy
Adaptive Sampling
Policy Optimization
Diffusion Models
Reinforcement Learning from Human Feedback
🔎 Similar Papers
No similar papers found.
Yuming Li
Peking University
Qingyu Li
Chinese University of Hong Kong, Shenzhen
artificial intelligence, remote sensing
Chengyu Bai
Peking University, Beijing, China
Xiangyang Luo
Kling Team, Kuaishou Technology, Beijing, China
Zeyue Xue
The University of Hong Kong, Hong Kong
Wenyu Qin
Harbin Institute of Technology
Control
Meng Wang
Kling Team, Kuaishou Technology, Beijing, China
Yikai Wang
Beijing Normal University, Beijing, China
Shanghang Zhang
Peking University
Embodied AI, Foundation Models