Q-learning with Adjoint Matching

📅 2026-01-20
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently and stably optimizing highly expressive diffusion- or flow-matching-based policies in continuous-action reinforcement learning, where directly exploiting Q-function gradients often leads to numerical instability. To this end, the authors propose the QAM algorithm, which brings adjoint matching — a technique recently introduced in generative modeling — into reinforcement learning, integrating temporal-difference backups with a transformed Q-function action gradient. This formulation yields a step-wise objective that eliminates the need to backpropagate through the multi-step denoising process, enabling unbiased and stable policy optimization. Evaluated on offline and offline-to-online sparse-reward tasks, QAM substantially outperforms existing methods, effectively balancing high policy expressiveness with training stability.

📝 Abstract
We propose Q-learning with Adjoint Matching (QAM), a novel TD-based reinforcement learning (RL) algorithm that tackles a long-standing challenge in continuous-action RL: efficient optimization of an expressive diffusion or flow-matching policy with respect to a parameterized Q-function. Effective optimization requires exploiting the first-order information of the critic, but it is challenging to do so for flow or diffusion policies because direct gradient-based optimization via backpropagation through their multi-step denoising process is numerically unstable. Existing methods work around this either by only using the value and discarding the gradient information, or by relying on approximations that sacrifice policy expressivity or bias the learned policy. QAM sidesteps both of these challenges by leveraging adjoint matching, a recently proposed technique in generative modeling, which transforms the critic's action gradient to form a step-wise objective function that is free from unstable backpropagation, while providing an unbiased, expressive policy at the optimum. Combined with temporal-difference backup for critic learning, QAM consistently outperforms prior approaches on hard, sparse reward tasks in both offline and offline-to-online RL.
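The mechanism described in the abstract — rolling out the denoising chain without gradients, taking the critic's action gradient only at the final action, and propagating it backward with local vector-Jacobian products to form a per-step regression target — can be loosely sketched as below. This is a hypothetical, heavily simplified illustration, not the paper's actual objective: the network shapes, the Euler discretization, and the regression target are assumptions, and the noise-schedule scaling and sign conventions of true adjoint matching are omitted for clarity.

```python
import torch
import torch.nn as nn

# Illustrative toy dimensions; all names here are hypothetical,
# not the paper's actual architecture.
STATE_DIM, ACTION_DIM, HIDDEN, N_STEPS = 4, 2, 64, 8

class Velocity(nn.Module):
    """Flow-matching policy: a velocity field v(s, a_t, t)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + 1, HIDDEN), nn.Tanh(),
            nn.Linear(HIDDEN, ACTION_DIM))

    def forward(self, s, a, t):
        return self.net(torch.cat([s, a, t], dim=-1))

def adjoint_matching_loss(policy, critic, s):
    """Simplified adjoint-matching-style objective: no backprop through
    the full denoising chain; the critic's action gradient is carried
    backward one step at a time via local vector-Jacobian products."""
    dt = 1.0 / N_STEPS
    # 1) Roll out the Euler denoising chain WITHOUT tracking gradients.
    a = torch.randn(s.shape[0], ACTION_DIM)
    traj = []
    with torch.no_grad():
        for k in range(N_STEPS):
            t = torch.full((s.shape[0], 1), k * dt)
            traj.append((a, t))
            a = a + dt * policy(s, a, t)
    # 2) Terminal adjoint = critic's gradient w.r.t. the final action.
    a_T = a.detach().requires_grad_(True)
    adj = torch.autograd.grad(critic(s, a_T).sum(), a_T)[0]
    # 3) Sweep backward: regress each step's velocity onto the detached
    #    adjoint target (sign/noise-schedule scaling omitted), then
    #    update the adjoint with one local VJP.
    loss = 0.0
    for a_k, t in reversed(traj):
        a_k = a_k.detach().requires_grad_(True)
        v = policy(s, a_k, t)
        loss = loss + ((v - adj.detach()) ** 2).sum(-1).mean()
        # Discrete backward adjoint: adj <- adj + dt * (dv/da)^T adj
        (vjp,) = torch.autograd.grad(v, a_k, grad_outputs=adj,
                                     retain_graph=True)
        adj = adj + dt * vjp
    return loss / N_STEPS
```

The key property the sketch illustrates is structural: each loss term depends on the policy only through a single denoising step, so gradients never flow through the full chain, which is what the abstract identifies as the source of numerical instability in naive critic-gradient optimization.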
Problem

Research questions and friction points this paper is trying to address.

continuous-action reinforcement learning
diffusion policy
flow-matching policy
gradient-based optimization
numerical instability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Q-learning
Adjoint Matching
Diffusion Policy
Continuous Action RL
Temporal Difference Learning