🤖 AI Summary
This work addresses the performance degradation and poor sample efficiency of traditional temporal difference (TD) learning under high update frequencies or non-stationary targets, both of which stem from a loss of feature plasticity. The authors integrate flow matching into TD learning by densely supervising the velocity field along integration paths and reading out values through an integral mechanism that enables error recovery at test time. A key insight is that the effectiveness of flow matching arises not from distribution modeling per se, but from the test-time recovery conferred by the integration mechanism and the feature plasticity induced by supervision at multiple interpolation points. In high update-to-data (UTD) online reinforcement learning settings, the method achieves roughly 2× the final critic performance and about 5× the sample efficiency of monolithic critics, while maintaining training stability.
📝 Abstract
Recent work shows that flow matching can be effective for scalar Q-value function estimation in reinforcement learning (RL), but it remains unclear why, or how this approach differs from standard critics. Contrary to conventional belief, we show that their success is not explained by distributional RL, as explicitly modeling return distributions can reduce performance. Instead, we argue that reading out values via integration, with dense velocity supervision at each step of this integration process during training, improves TD learning via two mechanisms. First, it enables robust value prediction through \emph{test-time recovery}, whereby iterative computation through integration dampens errors in early value estimates as more integration steps are performed. This recovery mechanism is absent in monolithic critics. Second, supervising the velocity field at multiple interpolant values induces more \emph{plastic} feature learning within the network, allowing critics to represent non-stationary TD targets without discarding previously learned features or overfitting to individual TD targets encountered during training. We formalize these effects and validate them empirically, showing that flow-matching critics substantially outperform monolithic critics (2$\times$ in final performance and around 5$\times$ in sample efficiency) in settings where loss of plasticity poses a challenge, e.g., high-UTD online RL problems, while remaining stable during learning.
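To make the two mechanisms concrete, here is a minimal 1-D sketch (not the paper's code; all sizes, learning rates, and names are illustrative). A small MLP velocity field $v_\theta(x, t)$ is trained with the standard flow-matching regression loss on interpolants between a Gaussian source and a fixed scalar stand-in for a TD target, so the network receives supervision at many interpolation points. The value is then read out by Euler-integrating the learned field; injecting an error mid-integration shows the dampening ("test-time recovery") effect, since later velocity evaluations pull the state back toward the target.

```python
# Toy flow-matching value readout: train v(x, t) on interpolants, then
# recover the target by Euler integration. Hypothetical setup: Y stands
# in for a TD target Q(s, a); H, lr, step counts are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
Y = 1.5          # stand-in scalar target
H = 32           # hidden width (assumption)

# Two-layer tanh MLP for v(x, t).
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(x, t):
    inp = np.stack([x, t], axis=1)           # (B, 2)
    h = np.tanh(inp @ W1 + b1)               # (B, H)
    return (h @ W2 + b2).ravel(), inp, h

lr = 0.05
for step in range(4000):
    x0 = rng.normal(0.0, 1.0, 64)            # source samples
    t = rng.uniform(0.0, 1.0, 64)
    xt = (1 - t) * x0 + t * Y                # interpolants: dense supervision
    target = Y - x0                          # conditional velocity target
    v, inp, h = forward(xt, t)
    err = v - target
    # Manual backprop of 0.5 * mean(err^2), plain SGD.
    g_out = err[:, None] / len(err)
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h**2)
    gW1 = inp.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def readout(x=0.0, n_steps=64, bump=None):
    """Euler-integrate the learned field from x; optionally inject a
    mid-integration error to probe test-time recovery."""
    dt = 1.0 / n_steps
    for k in range(n_steps):
        v, _, _ = forward(np.array([x]), np.array([k * dt]))
        x = x + dt * float(v[0])
        if bump is not None and k == n_steps // 2:
            x += bump                        # perturb partway through
    return x

print(readout())          # close to Y = 1.5
print(readout(bump=0.8))  # much closer to Y than a 0.8 error would suggest
```

A monolithic critic has no analogue of the second call: a 0.8 error in its single forward pass simply persists, whereas here the remaining integration steps contract the perturbation back toward the target.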