Online Reward-Weighted Fine-Tuning of Flow Matching with Wasserstein Regularization

📅 2025-02-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses two key challenges in online reinforcement learning (RL) fine-tuning of continuous flow-based generative models: policy collapse and high computational cost of likelihood estimation. We propose a novel framework that requires neither reward gradients nor data filtering. Our method integrates online reward-weighted importance reweighting with Wasserstein-2 distance regularization into conditional flow matching (CFM), enabling tractable upper-bound computation and guaranteeing theoretical convergence. Crucially, our framework establishes, for the first time, an equivalence to KL-regularized RL algorithms—thereby jointly optimizing policy performance and generation diversity. Extensive experiments on target image generation, image compression, and text–image alignment demonstrate substantial improvements in reward scores while preserving high-fidelity distribution coverage.

Technology Category

Application Category

📝 Abstract
Recent advancements in reinforcement learning (RL) have achieved great success in fine-tuning diffusion-based generative models. However, fine-tuning continuous flow-based generative models to align with arbitrary user-defined reward functions remains challenging, particularly due to issues such as policy collapse from overoptimization and the prohibitively high computational cost of likelihoods in continuous-time flows. In this paper, we propose an easy-to-use and theoretically sound RL fine-tuning method, which we term Online Reward-Weighted Conditional Flow Matching with Wasserstein-2 Regularization (ORW-CFM-W2). Our method integrates RL into the flow matching framework to fine-tune generative models with arbitrary reward functions, without relying on gradients of rewards or filtered datasets. By introducing an online reward-weighting mechanism, our approach guides the model to prioritize high-reward regions in the data manifold. To prevent policy collapse and maintain diversity, we incorporate Wasserstein-2 (W2) distance regularization into our method and derive a tractable upper bound for it in flow matching, effectively balancing exploration and exploitation of policy optimization. We provide theoretical analyses to demonstrate the convergence properties and induced data distributions of our method, establishing connections with traditional RL algorithms featuring Kullback-Leibler (KL) regularization and offering a more comprehensive understanding of the underlying mechanisms and learning behavior of our approach. Extensive experiments on tasks including target image generation, image compression, and text-image alignment demonstrate the effectiveness of our method, where our method achieves optimal policy convergence while allowing controllable trade-offs between reward maximization and diversity preservation.
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning continuous flow-based generative models
Aligning models with arbitrary reward functions
Preventing policy collapse and maintaining diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online reward-weighting mechanism
Wasserstein-2 distance regularization
Flow matching framework integration