Dynamic-TreeRPO: Breaking the Independent Trajectory Bottleneck with Structured Sampling

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low exploration efficiency and inflexible sampling strategies in flow-matching-based text-to-image generation trained with reinforcement learning (RL), this paper proposes Dynamic-TreeRPO. The method constructs tree-structured sampling trajectories that enable prefix sharing and dynamic noise scheduling; introduces a sliding-window sampling mechanism to improve trajectory exploration efficiency; and unifies supervised fine-tuning with RL via the LayerTuning-RL framework, which supports dynamic reward weighting and preserves exploration continuity. The approach further incorporates GRPO-guided optimization, constrained stochastic differential equation (SDE) sampling, and adaptive clipping bounds. On the HPS-v2.1, PickScore, and ImageReward benchmarks, Dynamic-TreeRPO sets a new state of the art, improving by 4.9%, 5.91%, and 8.66%, respectively, while accelerating training by nearly 50%. The approach significantly enhances semantic consistency and visual fidelity.

📝 Abstract
The integration of Reinforcement Learning (RL) into flow matching models for text-to-image (T2I) generation has driven substantial advances in generation quality. However, these gains often come at the cost of exhaustive exploration and inefficient sampling strategies due to slight variation in the sampling group. Building on this insight, we propose Dynamic-TreeRPO, which implements the sliding-window sampling strategy as a tree-structured search with dynamic noise intensities along depth. We perform GRPO-guided optimization and constrained Stochastic Differential Equation (SDE) sampling within this tree structure. By sharing prefix paths of the tree, our design effectively amortizes the computational overhead of trajectory search. With well-designed noise intensities for each tree layer, Dynamic-TreeRPO can enhance the variation of exploration without any extra computational cost. Furthermore, we seamlessly integrate Supervised Fine-Tuning (SFT) and the RL paradigm within Dynamic-TreeRPO to construct our proposed LayerTuning-RL, reformulating the loss function of SFT as a dynamically weighted Progress Reward Model (PRM) rather than a separate pretraining method. By associating this weighted PRM with dynamic-adaptive clipping bounds, disruption of the exploration process in Dynamic-TreeRPO is avoided. Benefiting from the tree-structured sampling and the LayerTuning-RL paradigm, our model dynamically explores a diverse search space along effective directions. Compared to existing baselines, our approach demonstrates significant superiority in terms of semantic consistency, visual fidelity, and human preference alignment on established benchmarks, including HPS-v2.1, PickScore, and ImageReward. In particular, our model outperforms SoTA by 4.9%, 5.91%, and 8.66% on those benchmarks, respectively, while improving the training efficiency by nearly 50%.
Problem

Research questions and friction points this paper is trying to address.

Improves text-to-image generation by breaking independent trajectory bottlenecks
Enhances exploration variation through structured tree sampling with dynamic noise
Integrates supervised fine-tuning with reinforcement learning for efficient training optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tree-structured search with dynamic noise intensities
GRPO-guided optimization and constrained SDE sampling
Dynamically weighted Progress Reward Model integration
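The two core ideas above can be illustrated with a toy sketch: a trajectory tree whose layers branch under a per-depth noise schedule (so sibling trajectories reuse their parents' shared prefix instead of being resampled from scratch), plus the standard GRPO group-normalized advantage. All function names, the 1-D state, and the reward are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def tree_sample(x0, noise_schedule, branch, drift):
    """Expand a tree of denoising trajectories (toy 1-D sketch).

    Each layer reuses its parents' states (shared prefixes), so a tree
    with branch**len(noise_schedule) leaves costs far fewer sampling
    steps than the same number of independent trajectories.
    """
    frontier, steps = [x0], 0
    for sigma in noise_schedule:          # dynamic noise intensity per depth
        nxt = []
        for x in frontier:
            for _ in range(branch):       # branch under noise level sigma
                nxt.append(drift(x) + sigma * random.gauss(0, 1))
                steps += 1
        frontier = nxt
    return frontier, steps

def grpo_advantages(rewards):
    """Group-normalized advantages in the GRPO style: (r - mean) / std."""
    mu = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mu) ** 2 for r in rewards) / len(rewards))
    return [(r - mu) / (std or 1.0) for r in rewards]

random.seed(0)
sigmas = [1.0, 0.5, 0.1]                  # decreasing noise along depth
leaves, tree_steps = tree_sample(0.0, sigmas, branch=3,
                                 drift=lambda x: 0.9 * x)
indep_steps = len(leaves) * len(sigmas)   # cost if trajectories were independent
adv = grpo_advantages([-abs(x) for x in leaves])  # toy reward: closeness to 0
print(len(leaves), tree_steps, indep_steps)       # prefix sharing: 39 vs 81 steps
```

With branch factor 3 and depth 3, the tree yields 27 leaf trajectories for 3 + 9 + 27 = 39 sampling steps, versus 27 × 3 = 81 for independent rollouts, which is the amortization the abstract attributes to shared prefix paths.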
Xiaolong Fu (JD.COM)
Lichen Ma (JD.COM)
Zipeng Guo (JD.COM, Sun Yat-sen University)
Gaojing Zhou (JD.COM)
Chongxiao Wang (Alibaba Group)
ShiPing Dong (JD.COM, Hunan University)
Shizhe Zhou (Hunan University)
Ximan Liu (JD.COM)
Jingling Fu (JD.COM)
Tan Lit Sin (JD.COM, Tsinghua University)
Yu Shi (JD.COM)
Zhen Chen (JD.COM)
Junshi Huang (Meituan)
Jason Li (JD.COM)

Topics: Computer Vision, NLP, Machine Learning