GoldenStart: Q-Guided Priors and Entropy Control for Distilling Flow Policies

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key limitations of flow-matching policies in reinforcement learning: high inference latency, inefficient online exploration, and the neglect of initial noise distribution design and policy stochasticity during distillation. To overcome these issues, the authors propose GoldenStart, a novel approach that, for the first time, employs a Q-function-guided conditional variational autoencoder to construct a prior distribution concentrated in high-Q regions, thereby optimizing the initialization points for flow-policy distillation. Entropy regularization is additionally introduced to enable tunable stochasticity, allowing a flexible trade-off between exploitation and exploration. By seamlessly integrating generative modeling with the actor-critic framework, GoldenStart significantly outperforms state-of-the-art methods on both offline and online continuous control benchmarks, while simultaneously achieving faster inference and stronger exploration.

📝 Abstract
Flow-matching policies hold great promise for reinforcement learning (RL) by capturing complex, multi-modal action distributions. However, their practical application is often hindered by prohibitive inference latency and ineffective online exploration. Although recent works have employed one-step distillation for fast inference, the structure of the initial noise distribution remains an overlooked factor with significant untapped potential. This overlooked factor, along with the challenge of controlling policy stochasticity, constitutes two critical areas for advancing distilled flow-matching policies. To overcome these limitations, we propose GoldenStart (GSFlow), a policy distillation method with Q-guided priors and explicit entropy control. Instead of initializing generation from uninformed noise, we introduce a Q-guided prior modeled by a conditional VAE. This state-conditioned prior repositions the starting points of the one-step generation process into high-Q regions, effectively providing a "golden start" that shortcuts the policy to promising actions. Furthermore, for effective online exploration, we enable our distilled actor to output a stochastic distribution instead of a deterministic point. This is governed by entropy regularization, allowing the policy to shift from pure exploitation to principled exploration. Our integrated framework demonstrates that by designing the generative start point and explicitly controlling policy entropy, it is possible to achieve efficient and exploratory policies, bridging generative models and practical actor-critic methods. We conduct extensive experiments on offline and online continuous control benchmarks, where our method significantly outperforms prior state-of-the-art approaches. Code will be available at https://github.com/ZhHe11/GSFlow-RL.
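The recipe described above — sample a starting point from a state-conditioned prior concentrated in high-Q regions, map it to an action distribution in one step, and regularize that distribution's entropy — can be sketched numerically. The sketch below is an illustrative assumption throughout, not the paper's implementation: the toy critic `q_value`, the hand-coded `prior_sample` standing in for the learned CVAE, the identity-map `one_step_actor`, and the temperature `alpha` are all made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_value(state, action):
    # Toy critic: rewards actions close to -state (illustrative assumption).
    return -np.sum((action + state) ** 2, axis=-1)

def prior_sample(state, n):
    # Stand-in for the Q-guided CVAE prior: a state-conditioned Gaussian
    # whose mean has been shifted into a high-Q region for this toy critic.
    mean = -state  # pretend the CVAE learned this shift
    return mean + 0.1 * rng.standard_normal((n, state.shape[-1]))

def one_step_actor(state, z):
    # One-step distilled map z -> (mu, sigma); identity map here for brevity.
    # In the real method this is a learned network conditioned on the state.
    mu = z
    log_sigma = np.full_like(mu, -1.0)  # learnable in the actual policy
    return mu, np.exp(log_sigma)

def entropy_gaussian(sigma):
    # Differential entropy of a diagonal Gaussian, summed over action dims.
    return np.sum(0.5 * np.log(2 * np.pi * np.e * sigma ** 2), axis=-1)

state = np.array([0.5, -0.2])
z = prior_sample(state, n=256)              # "golden start" points
mu, sigma = one_step_actor(state, z)        # stochastic one-step actor head
actions = mu + sigma * rng.standard_normal(mu.shape)

alpha = 0.01  # entropy temperature: 0 -> pure exploitation, larger -> more exploration
objective = q_value(state, actions).mean() + alpha * entropy_gaussian(sigma).mean()
```

Under this setup, actions generated from the shifted prior land in a higher-Q region on average than actions started from uninformed standard-normal noise, which is the effect the Q-guided prior is meant to achieve; raising `alpha` trades some of that Q-value for a wider, more exploratory action distribution.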
Problem

Research questions and friction points this paper is trying to address.

flow-matching policies
policy distillation
inference latency
online exploration
entropy control
Innovation

Methods, ideas, or system contributions that make the work stand out.

flow-matching
policy distillation
Q-guided prior
entropy control
conditional VAE