🤖 AI Summary
Existing distributional reinforcement learning (DRL) methods typically rely on discrete categorical or finite-quantile representations of the return distribution, which limits their ability to capture its fine-grained structure and hinders precise identification of high-uncertainty states for exploration and safe decision-making. To address this, this work introduces flow-based generative modeling into DRL for the first time, proposing a flow-matching objective and a flow-derivative ordinary differential equation (ODE) that, combined with the distributional Bellman equation, construct a continuous, differentiable, full-density path over state-level return distributions. The approach enables fine-grained characterization of high-variance states and dynamic prioritization of transitions during learning. Evaluated on 37 state-based and 25 image-based benchmark tasks, it achieves a 1.3× average improvement in success rate over leading DRL baselines.
📝 Abstract
While most reinforcement learning methods today flatten the distribution of future returns to a single scalar value, distributional RL methods exploit the return distribution to provide stronger learning signals and to enable applications in exploration and safe RL. The predominant approaches estimate the return distribution as a categorical distribution over discrete bins or as a finite number of quantiles; such representations leave unanswered questions about the fine-grained structure of the return distribution and about how to distinguish states with high return uncertainty for decision-making. The key idea in this paper is to use modern, flexible flow-based models to estimate the full future return distributions and identify those states with high return variance. We do so by formulating a new flow-matching objective that generates probability density paths satisfying the distributional Bellman equation. Building upon the learned flow models, we estimate the return uncertainty of distinct states using a new flow-derivative ODE. We additionally use this uncertainty information to prioritize learning a more accurate return estimation on certain transitions. We compare our method (Value Flows) with prior methods in the offline and offline-to-online settings. Experiments on $37$ state-based and $25$ image-based benchmark tasks demonstrate that Value Flows achieves a $1.3\times$ improvement on average in success rates. Website: https://pd-perry.github.io/value-flows Code: https://github.com/chongyi-zheng/value-flows
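For readers unfamiliar with flow matching, the standard construction the paper builds on pairs a base sample $x_0$ (e.g., Gaussian noise) with a target sample $x_1$ (here, a return) along a straight-line probability path, and trains a network to regress the path's velocity. The sketch below computes those regression targets in NumPy; it is a minimal illustration of generic conditional flow matching, not the paper's distributional-Bellman objective, and all names are illustrative:

```python
import numpy as np

def flow_matching_targets(x0, x1, rng):
    """Sample points on the straight-line path from x0 (noise) to x1
    (return samples) and the velocity a flow model should regress.

    x0, x1: arrays of shape (batch, dim). Returns (t, x_t, v_target).
    """
    t = rng.uniform(size=(x0.shape[0], 1))   # random path times in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1            # linear interpolant between endpoints
    v_target = x1 - x0                       # constant velocity along a straight path
    return t, x_t, v_target

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 1))             # base (Gaussian) samples
x1 = rng.standard_normal((4, 1)) + 5.0       # stand-in "return" samples
t, x_t, v = flow_matching_targets(x0, x1, rng)
# A flow model v_theta(x_t, t) would be trained with an L2 loss against v;
# integrating the learned ODE dx/dt = v_theta from t=0 to t=1 then maps
# noise samples to samples from the modeled return distribution.
```

In the paper's setting the regression targets are additionally constrained so the induced density path satisfies the distributional Bellman equation, and a separate flow-derivative ODE extracts per-state return variance from the learned flow.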