🤖 AI Summary
Existing value function estimation methods suffer from limited expressive capacity: multi-critic ensembles merely aggregate point estimates while ignoring distributional structure, whereas distributional reinforcement learning relies on discretization or quantile regression, hindering accurate modeling of complex continuous value distributions. This paper introduces FlowCritic, the first approach to incorporate flow matching into RL value estimation, establishing a generative paradigm for value distribution modeling. FlowCritic directly learns the continuous probability distribution of state-action values via a continuous normalizing flow trained with flow matching, enabling high-fidelity sample generation without discretization or quantile assumptions. Evaluated across multiple benchmark tasks, FlowCritic significantly improves value estimation accuracy and training stability, accelerates policy convergence, and enhances long-horizon performance.
📝 Abstract
Reliable value estimation serves as the cornerstone of reinforcement learning (RL): it evaluates long-term returns and guides policy improvement, significantly influencing convergence speed and final performance. Existing works improve the reliability of value function estimation via multi-critic ensembles and distributional RL, yet the former merely combines multiple point estimates without capturing distributional information, whereas the latter relies on discretization or quantile regression, limiting its ability to express complex value distributions. Inspired by flow matching's success in generative modeling, we propose a generative paradigm for value estimation, named FlowCritic. Departing from conventional regression for deterministic value prediction, FlowCritic leverages flow matching to model value distributions and generate samples for value estimation.
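The summary does not include implementation details, but the flow-matching machinery FlowCritic builds on is standard: a velocity network is regressed toward a conditional target along a linear interpolation path between noise and a return sample, and value samples are then generated by integrating the learned velocity field from noise. A minimal NumPy sketch of those two pieces (all function names are illustrative assumptions, not the paper's API) might look like this:

```python
import numpy as np

def cfm_training_pair(x0, x1, t):
    """Conditional flow matching: interpolate between a noise draw x0
    and a sampled return x1 at time t, and return the point on the path
    together with the target velocity the critic network regresses toward."""
    x_t = (1.0 - t) * x0 + t * x1  # linear interpolation path
    v_target = x1 - x0             # constant target velocity along this path
    return x_t, v_target

def euler_sample(v_fn, x0, steps=10):
    """Generate a value sample by integrating dx/dt = v_fn(x, t)
    from noise x0 over t in [0, 1] with the Euler method."""
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        x = x + dt * v_fn(x, i * dt)
    return x
```

In FlowCritic, `v_fn` would be a neural network conditioned on the state-action pair; here, because the conditional target velocity is constant along the linear path, an oracle velocity field lets Euler integration recover the return sample exactly, which makes the sampling step easy to sanity-check.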