🤖 AI Summary
This work addresses the degradation in sample quality that generative models suffer in high-dimensional settings due to systematic biases. To this end, the authors propose the Bi-stage Flow Refinement (BFR) framework, which achieves bias correction in a single function evaluation without injecting noise or perturbing the sampling dynamics. BFR performs deterministic alignment in latent space and lightweight refinement in data space, jointly improving fidelity and diversity while preserving the original ODE trajectory. Experimental results show that the method significantly improves generation quality, reducing the FID on MNIST from 3.95 to 1.46, a new state of the art, and yielding substantial gains on CIFAR-10 and FFHQ as well.
📝 Abstract
Generative models, including diffusion and flow-based models, often exhibit systematic biases that degrade sample quality, particularly in high-dimensional settings. We revisit refinement methods and show that effective bias correction can be achieved as a post-hoc procedure, without noise injection or multi-step resampling. We propose a flow-matching-based \textbf{Bi-stage Flow Refinement (BFR)} framework with two refinement strategies operating at different stages: latent space alignment for approximately invertible generators, and data space refinement trained with lightweight augmentations. Unlike previous refiners that perturb the sampling dynamics, BFR preserves the original ODE trajectory and applies deterministic corrections to generated samples. Experiments on MNIST, CIFAR-10, and FFHQ at 256x256 resolution demonstrate consistent improvements in fidelity and coverage; notably, starting from base samples with FID 3.95, latent space refinement achieves a \textbf{state-of-the-art} FID of \textbf{1.46} on MNIST using only a single additional function evaluation (1-NFE), while maintaining sample diversity.
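The core idea described above, sampling along the unmodified base ODE and then applying one deterministic correction, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the Euler integrator, and the residual-style refiner are all assumptions for illustration only.

```python
import numpy as np

def sample_ode(velocity_field, z, n_steps=50):
    """Euler integration of the base flow ODE from noise z.
    The trajectory itself is never perturbed (no noise injection)."""
    x, dt = z, 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * velocity_field(x, t)
    return x

def refine_1nfe(refiner, x):
    """Single additional function evaluation (1-NFE): a deterministic
    post-hoc correction applied to the already-generated sample."""
    return x + refiner(x)  # assumed residual form; the actual refiner may differ

# Toy stand-ins (not trained models) so the sketch runs end to end.
velocity_field = lambda x, t: -0.1 * x
refiner = lambda x: -0.05 * x

z = np.random.default_rng(0).standard_normal(4)
x_base = sample_ode(velocity_field, z)      # base sample, ODE untouched
x_refined = refine_1nfe(refiner, x_base)    # one extra deterministic NFE
```

The point of the sketch is the separation of concerns: generation is left exactly as-is, and all bias correction happens in the single deterministic call afterwards.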