Rectified Noise: A Generative Model Using Positive-incentive Noise

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited generation quality of pre-trained Rectified Flow (RF) models, this paper proposes Rectified Noise (RN), a lightweight enhancement that injects Positive-incentive Noise (pi-noise) into the RF velocity field. RN introduces only 0.39% additional parameters and requires no retraining of the pre-trained backbone. By unifying the probability flow ODE and reverse-time SDE frameworks, it dynamically injects pi-noise during sampling to improve both diversity and fidelity. Its core innovation is the first integration of a positive-incentive noise mechanism directly into the RF velocity field, enabling low-overhead, architecture-agnostic model upgrading. On ImageNet-1k, RN reduces FID from 10.16 to 9.05. Extensive experiments across multiple architectures (e.g., DiT, SD) and datasets (e.g., CIFAR-10, CelebA-HQ) confirm its strong generalizability and consistent improvement over baseline RF methods.

📝 Abstract
Rectified Flow (RF) has been widely used as an effective generative model. Although RF is primarily based on probability flow Ordinary Differential Equations (ODE), recent studies have shown that injecting noise through reverse-time Stochastic Differential Equations (SDE) during sampling can achieve superior generative performance. Inspired by Positive-incentive Noise (pi-noise), we propose an innovative generative algorithm to train pi-noise generators, namely Rectified Noise (RN), which improves generative performance by injecting pi-noise into the velocity field of pre-trained RF models. After introducing the Rectified Noise pipeline, pre-trained RF models can be efficiently transformed into pi-noise generators. We validate Rectified Noise through extensive experiments across various model architectures and datasets. Notably, we find that: (1) RF models using Rectified Noise reduce FID from 10.16 to 9.05 on ImageNet-1k. (2) Models equipped with pi-noise generators achieve improved performance with only 0.39% additional training parameters.
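As a rough illustration of the mechanism the abstract describes, the sketch below runs plain Euler sampling of a velocity field and perturbs each step with a state-dependent noise term standing in for the learned pi-noise generator. Both `velocity` and `pi_noise` are hypothetical toy stand-ins, not the paper's trained networks:

```python
import numpy as np

def velocity(x, t):
    # Stand-in for a pre-trained RF velocity field v_theta(x, t);
    # a toy linear field so the example is self-contained.
    return -x * (1.0 - t)

def pi_noise(x, t, scale=0.05):
    # Hypothetical pi-noise generator g_phi(x, t): in the paper this is a
    # small trained module; here it is approximated by Gaussian noise whose
    # magnitude decays as t -> 1.
    return scale * np.sqrt(max(1.0 - t, 0.0)) * np.random.randn(*x.shape)

def sample_rn(x0, n_steps=100, seed=0):
    """Euler sampling with pi-noise injected into the velocity field."""
    np.random.seed(seed)
    x, dt = x0.copy(), 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        v = velocity(x, t) + pi_noise(x, t)  # perturbed velocity field
        x = x + v * dt                        # Euler step along the flow
    return x

x = sample_rn(np.random.randn(4, 2))
print(x.shape)  # (4, 2)
```

Because the noise enters through the velocity field rather than the sampler itself, the same wrapper applies to any pre-trained RF backbone, which is the architecture-agnostic property the summary highlights.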
Problem

Research questions and friction points this paper is trying to address.

Improving generative model performance via noise injection
Enhancing pre-trained Rectified Flow models with positive-incentive noise
Reducing FID scores while minimizing additional training parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Injecting Positive-incentive Noise into velocity field
Transforming pre-trained RF models into pi-noise generators
Achieving improved performance with minimal additional parameters
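The "minimal additional parameters" point can be made concrete with a back-of-the-envelope count: a pi-noise generator only needs a small head next to a large frozen backbone. The layer sizes below are purely illustrative (chosen so the overhead lands near the paper's reported 0.39%), not the actual RN architecture:

```python
import numpy as np

def n_params(shapes):
    # Total parameter count over a list of weight-matrix shapes.
    return sum(int(np.prod(s)) for s in shapes)

# Hypothetical sizes: a large frozen RF backbone vs. a tiny bottleneck MLP
# acting as the pi-noise head (illustrative numbers only).
backbone = n_params([(1024, 1024)] * 48)          # ~50M parameters
pi_head = n_params([(1024, 96), (96, 1024)])      # small bottleneck MLP

overhead = pi_head / backbone
print(f"{overhead:.2%}")  # 0.39%
```

Only the small head is trained, so the cost of upgrading an existing RF model stays negligible relative to retraining the backbone.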
👥 Authors
Zhenyu Gu (AMD; high performance computing, deep learning, EDA)
Yanchen Xu (Institute of Artificial Intelligence (TeleAI), China Telecom)
Sida Huang (Institute of Artificial Intelligence (TeleAI), China Telecom)
Yubin Guo (Institute of Artificial Intelligence (TeleAI), China Telecom)
Hongyuan Zhang (Institute of Artificial Intelligence (TeleAI), China Telecom)