🤖 AI Summary
Problem: Existing reward-based fine-tuning of generative models, particularly within flow matching and denoising diffusion frameworks, lacks theoretical foundations and struggles to jointly ensure accuracy, sample diversity, and generalization to unseen human preferences.
Method: This work formally recasts reward-driven generation as a stochastic optimal control (SOC) problem. We rigorously prove that memoryless noise scheduling is necessary to decouple noise from samples, thereby enabling tractable optimization. Building on this insight, we propose Adjoint Matching: a novel algorithm that transforms the SOC formulation into a supervised regression task via adjoint-state methods, unifying stochastic optimal control, adjoint calculus, flow matching, and diffusion modeling.
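To make the regression view concrete, here is a minimal toy sketch of the idea (a sketch under stated assumptions, not the paper's implementation: the linear drift `A`, the quadratic reward, and the target formula `-sigma * a_k` are illustrative simplifications): simulate a trajectory under the base drift with fresh noise at each step, solve an adjoint recursion backward from the reward gradient, and fit a control by least-squares regression onto the adjoint-derived targets.

```python
import numpy as np

# Toy sketch of the adjoint-matching idea (illustrative names, not the paper's API).
rng = np.random.default_rng(0)
d, N, dt = 2, 50, 1.0 / 50
sigma = 1.0
A = np.array([[-1.0, 0.2], [0.0, -0.5]])   # linear base drift b(x) = A @ x

def reward_grad(x):
    # toy reward r(x) = -||x - g||^2 / 2, so grad r(x) = g - x
    g = np.array([1.0, -1.0])
    return g - x

# 1) Simulate a trajectory under the base drift with fresh (memoryless) noise.
xs = np.zeros((N + 1, d))
for k in range(N):
    xs[k + 1] = xs[k] + dt * A @ xs[k] + sigma * np.sqrt(dt) * rng.standard_normal(d)

# 2) Solve the adjoint recursion backward: a_N = -grad r(X_N),
#    a_k = a_{k+1} + dt * (db/dx)^T a_{k+1}  (here db/dx = A, constant).
a = np.zeros((N + 1, d))
a[N] = -reward_grad(xs[N])
for k in range(N - 1, -1, -1):
    a[k] = a[k + 1] + dt * A.T @ a[k + 1]

# 3) Build per-step regression targets for the added control from the adjoints.
targets = -sigma * a[:-1]

# 4) Fit a linear control u(x) = x @ W by ordinary least squares.
W, *_ = np.linalg.lstsq(xs[:-1], targets, rcond=None)
print(W.shape)  # -> (2, 2)
```

The point of the sketch is the structure, not the specific targets: once the adjoint states are computed along simulated trajectories, fine-tuning reduces to a supervised regression, which is far more tractable than optimizing the SOC objective directly.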
Results: Experiments demonstrate that our approach significantly outperforms state-of-the-art methods in fidelity, perceptual realism, and generalization to unseen reward models, while preserving high sample diversity.
📝 Abstract
Dynamical generative models that produce samples through an iterative process, such as Flow Matching and denoising diffusion models, have seen widespread use, but there have been few theoretically sound methods for improving these models with reward fine-tuning. In this work, we cast reward fine-tuning as stochastic optimal control (SOC). Critically, we prove that a very specific memoryless noise schedule must be enforced during fine-tuning in order to account for the dependency between the noise variable and the generated samples. We also propose a new algorithm named Adjoint Matching, which outperforms existing SOC algorithms by casting SOC problems as regression problems. We find that our approach significantly improves over existing methods for reward fine-tuning, achieving better consistency, realism, and generalization to unseen human preference reward models, while retaining sample diversity.