Rethinking Refinement: Correcting Generative Bias without Noise Injection

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the degradation in sample quality that generative models suffer in high-dimensional settings due to systematic biases. The authors propose the Bi-stage Flow Refinement (BFR) framework, which corrects this bias in a single additional function evaluation, without injecting noise or perturbing the sampling dynamics. BFR combines deterministic alignment in latent space with lightweight refinement in data space, improving fidelity and diversity jointly while preserving the original ODE trajectory. Experiments show that the method substantially improves generation quality, reducing FID on MNIST from 3.95 to 1.46 (a new state of the art) and yielding consistent gains on CIFAR-10 and FFHQ.

📝 Abstract
Generative models, including diffusion and flow-based models, often exhibit systematic biases that degrade sample quality, particularly in high-dimensional settings. We revisit refinement methods and show that effective bias correction can be achieved as a post-hoc procedure, without noise injection or multi-step resampling. We propose a flow-matching-based Bi-stage Flow Refinement (BFR) framework with two refinement strategies operating at different stages: latent space alignment for approximately invertible generators, and data space refinement trained with lightweight augmentations. Unlike previous refiners that perturb sampling dynamics, BFR preserves the original ODE trajectory and applies deterministic corrections to generated samples. Experiments on MNIST, CIFAR-10, and FFHQ at 256×256 resolution demonstrate consistent improvements in fidelity and coverage; notably, starting from base samples with FID 3.95, latent space refinement achieves a state-of-the-art FID of 1.46 on MNIST using only a single additional function evaluation (1-NFE), while maintaining sample diversity.
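The abstract's key structural claim is that the refiner never touches the sampler: the base ODE trajectory is integrated exactly as before, and the correction is one extra deterministic call afterward. A minimal sketch of that pipeline is below; `velocity` and `refiner` are hypothetical stand-ins for the trained flow field and the BFR data-space refiner (the paper's actual architectures are not described on this page), and plain Euler integration is assumed for the base sampler.

```python
import numpy as np

def sample_with_bfr(velocity, refiner, z, num_steps=50):
    """Draw a sample by integrating the base flow ODE, then apply
    one deterministic refinement step (the 1-NFE correction)."""
    # Euler integration of dx/dt = v(x, t) from t=0 to t=1.
    # This loop is the unmodified base sampler: BFR-style refinement
    # leaves this trajectory untouched (no noise, no resampling).
    x = np.array(z, dtype=float, copy=True)
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = i * dt
        x = x + dt * velocity(x, t)
    # Single additional function evaluation: a deterministic
    # post-hoc correction applied to the finished sample.
    return refiner(x)
```

The point of the structure is that `refiner` can be trained and swapped independently of the sampler, since it only ever sees finished samples.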
Problem

Research questions and friction points this paper is trying to address.

generative bias
sample quality
high-dimensional settings
diffusion models
flow-based models
Innovation

Methods, ideas, or system contributions that make the work stand out.

flow matching
bias correction
deterministic refinement
latent space alignment
generative models
Xin Peng
East China University of Science and Technology
Artificial Intelligence, Machine Learning, Complex Process Modeling
Ang Gao
State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China; School of Physical Science and Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China