FlowCritic: Bridging Value Estimation with Flow Matching in Reinforcement Learning

📅 2025-10-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing value function estimation methods suffer from limited expressive capacity: multi-critic ensembles merely aggregate point estimates while ignoring distributional structure, whereas distributional reinforcement learning relies on discretization or quantile regression, hindering accurate modeling of complex continuous value distributions. This paper introduces FlowCritic, the first approach to incorporate flow matching into RL value estimation, establishing a generative paradigm for value distribution modeling. FlowCritic directly learns the continuous probability distribution of state-action values via a differentiable normalizing flow, enabling high-fidelity sample generation without discretization or quantile assumptions. Evaluated across multiple benchmark tasks, FlowCritic significantly improves value estimation accuracy and training stability, accelerates policy convergence, and enhances long-horizon performance.

📝 Abstract
Reliable value estimation is the cornerstone of reinforcement learning (RL): it evaluates long-term returns and guides policy improvement, strongly influencing both convergence speed and final performance. Existing works improve the reliability of value function estimation via multi-critic ensembles and distributional RL, yet the former merely combines multiple point estimates without capturing distributional information, whereas the latter relies on discretization or quantile regression, limiting its ability to express complex value distributions. Inspired by flow matching's success in generative modeling, we propose a generative paradigm for value estimation, named FlowCritic. Departing from conventional regression for deterministic value prediction, FlowCritic leverages flow matching to model value distributions and generate samples for value estimation.
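To make the abstract's idea concrete, here is a minimal NumPy sketch of a conditional flow-matching objective applied to return targets. The linear interpolation path, the `v_theta` interface, and all variable names are illustrative assumptions for a standard flow-matching setup, not the paper's actual implementation.

```python
import numpy as np

def flow_matching_loss(v_theta, returns, sa_feats, rng):
    """Conditional flow-matching loss for a batch of observed returns
    (regression targets) conditioned on state-action features.

    Uses the common linear path x_t = (1 - t) * x0 + t * x1, whose
    ground-truth velocity is u = x1 - x0 (an illustrative choice)."""
    n = returns.shape[0]
    x0 = rng.standard_normal(n)        # samples from the noise base
    t = rng.uniform(size=n)            # random times along the path
    xt = (1.0 - t) * x0 + t * returns  # points on the probability path
    u = returns - x0                   # target velocity field
    pred = v_theta(xt, t, sa_feats)    # critic's predicted velocity
    return np.mean((pred - u) ** 2)

# Toy stand-in for a velocity network: always predicts zero velocity.
zero_v = lambda xt, t, sa: np.zeros_like(xt)
rng = np.random.default_rng(0)
loss = flow_matching_loss(zero_v, np.array([1.0, 2.0, 3.0]), None, rng)
```

In a real critic, `v_theta` would be a neural network taking the state-action features as conditioning input and trained by minimizing this loss over replay-buffer batches.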
Problem

Research questions and friction points this paper is trying to address.

Improving reliability of value function estimation in reinforcement learning
Overcoming limitations of multi-critic ensembles and distributional RL methods
Modeling complex value distributions using flow matching techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flow matching models value distributions in RL
Generative paradigm replaces deterministic value prediction
FlowCritic generates samples for value estimation
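The last bullet implies that value estimates come from samples of the learned distribution. One standard way to draw such samples from a trained velocity field is to integrate it from noise to the target distribution; the forward-Euler sketch below, the field interface, and the closed-form test field are assumptions, not the paper's method.

```python
import numpy as np

def sample_values(v_theta, sa_feat, n_samples=64, n_steps=20, rng=None):
    """Draw return samples by Euler-integrating a learned velocity
    field from t = 0 (standard normal noise) to t = 1 (the modeled
    value distribution); the sample mean serves as a value estimate."""
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(n_samples)   # start from base noise
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = np.full(n_samples, k * dt)   # current time along the path
        x = x + dt * v_theta(x, t, sa_feat)
    return x                             # np.mean(x) estimates Q(s, a)

# Hypothetical analytic field that transports any noise sample to the
# point mass at 3.0 (a stand-in for a trained critic network).
point_field = lambda x, t, sa: (3.0 - x) / (1.0 - t)
samples = sample_values(point_field, None, n_samples=8,
                        rng=np.random.default_rng(1))
```

With more ODE steps or a higher-order integrator the samples track the learned distribution more closely, at the cost of extra network evaluations per value query.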
Shan Zhong
School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Shutong Ding
School of Information Science and Technology, ShanghaiTech University
He Diao
School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Xiangyu Wang
Professor, Curtin University
Civil Engineering, Building Information Modeling, Smart City, Automation and Robotics, Smart
Kah Chan Teh
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Bei Peng
Lecturer (Assistant Professor), University of Sheffield
Machine Learning, Reinforcement Learning, Interactive Learning, Multi-Agent Systems