From Denoising to Refining: A Corrective Framework for Vision-Language Diffusion Model

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Discrete diffusion models for vision-language tasks suffer from a training-inference inconsistency: erroneous initial tokens generated during parallel decoding trigger cascading errors such as grammatical violations and semantic hallucinations. To address this, we propose a paradigm shift from "passive denoising" to "active refinement," introducing the first two-stage self-correction training framework. Stage one instills error-detection and revision capability via synthetic error-revision data augmentation; stage two runs an online expert-guided self-correction loop that dynamically rectifies the model's own outputs during generation. The method integrates bidirectional context modeling, targeted error injection during training, and imitation of expert correction strategies. Experiments demonstrate substantial improvements in the syntactic correctness and factual accuracy of generated text, enabling stable, efficient parallel generation across diverse vision-language tasks that consistently outperforms conventional denoising baselines.

📝 Abstract
Discrete diffusion models have emerged as a promising direction for vision-language tasks, offering bidirectional context modeling and theoretical parallelization. However, their practical application is severely hindered by a train-inference discrepancy, which leads to catastrophic error cascades: initial token errors during parallel decoding pollute the generation context, triggering a chain reaction of compounding errors and leading to syntactic errors and semantic hallucinations. To address this fundamental challenge, we reframe the generation process from passive denoising to active refining. We introduce ReDiff, a refining-enhanced diffusion framework that teaches the model to identify and correct its own errors. Our approach features a two-stage training process: first, we instill a foundational revision capability by training the model to revise synthetic errors; second, we implement a novel online self-correction loop where the model is explicitly trained to revise its own flawed drafts by learning from an expert's corrections. This mistake-driven learning endows the model with the crucial ability to revisit and refine its already generated output, effectively breaking the error cascade. Extensive experiments demonstrate that ReDiff significantly improves the coherence and factual accuracy of generated content, enabling stable and efficient parallel generation far superior to traditional denoising methods. Our codes and models are available at https://rediff-hku.github.io/.
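The first training stage described above builds revision pairs by corrupting clean sequences with synthetic errors, so the model learns to map a flawed draft back to its clean target. A minimal sketch of that data construction, with all names (`make_revision_example`, the toy vocabulary) being illustrative assumptions rather than the paper's actual API:

```python
import random

def make_revision_example(tokens, vocab, error_rate=0.3, seed=0):
    """Build a (corrupted_draft, clean_target) pair for stage-one training.

    Tokens are randomly replaced with other vocabulary items at `error_rate`,
    mimicking the erroneous drafts the model must learn to revise.
    Hypothetical sketch; not the paper's actual implementation.
    """
    rng = random.Random(seed)
    draft = []
    for tok in tokens:
        if rng.random() < error_rate:
            # Inject a synthetic error: swap in a different token.
            draft.append(rng.choice([v for v in vocab if v != tok]))
        else:
            draft.append(tok)
    return draft, list(tokens)
```

The resulting (draft, target) pairs can then be fed to any sequence model as a supervised revision task, which is the capability the second, online stage builds on.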
Problem

Research questions and friction points this paper is trying to address.

Corrects error cascades in vision-language diffusion models
Transforms passive denoising into active self-refining process
Enables models to identify and fix their own mistakes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reframing generation from denoising to active refining
Introducing two-stage training with synthetic error revision
Implementing online self-correction loop for mistake-driven learning
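The online self-correction loop in the second stage can be pictured as iterative refinement: the model revisits its own draft and applies corrections (learned by imitating an expert) until the output stabilizes. A toy sketch of that control flow, where `refine` and `revise_step` are hypothetical names and the "expert" is simulated by a simple fix-one-mismatch function:

```python
def refine(draft, revise_step, max_rounds=8):
    """Iteratively revise a draft until it reaches a fixed point.

    `revise_step` stands in for a trained corrector (in the paper, a model
    taught to imitate expert corrections); here it is any function mapping
    a draft to a revised draft. Illustrative sketch only.
    """
    for _ in range(max_rounds):
        revised = revise_step(draft)
        if revised == draft:  # no further corrections: stop early
            break
        draft = revised
    return draft
```

Because each round can fix tokens anywhere in the already generated sequence, errors made early in parallel decoding no longer stay frozen in the context, which is the mechanism that breaks the error cascade.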