🤖 AI Summary
Existing unified multimodal models struggle to emulate the human-like capacity for iterative reasoning and progressive refinement over intermediate visual states during drawing. This work proposes a process-driven image generation paradigm that models synthesis as a multi-round, interleaved reasoning loop of "text planning → visual sketching → text reflection → visual refinement." It introduces, for the first time, a multi-step inference mechanism with bidirectional constraints between textual instructions and visual states, enabling dynamic evaluation and correction of intermediate outputs. Through dense step-wise supervision, spatial-semantic consistency constraints, and strategies to preserve textual priors, the approach ensures interpretability, controllability, and plausibility of intermediate states throughout generation. Experiments demonstrate significant improvements in structural coherence and detail fidelity across multiple text-to-image benchmarks.
📄 Abstract
Humans paint images incrementally: they plan a global layout, sketch a coarse draft, then inspect and refine details; most importantly, each step is grounded in the evolving visual state. But can unified multimodal models trained on text-image interleaved datasets likewise imagine the chain of intermediate states? In this paper, we introduce process-driven image generation, a multi-step paradigm that decomposes synthesis into an interleaved reasoning trajectory of thoughts and actions. Rather than generating an image in a single step, our approach unfolds across multiple iterations, each consisting of four stages: textual planning, visual drafting, textual reflection, and visual refinement. The textual reasoning explicitly conditions how the visual state should evolve, while the generated visual intermediate in turn constrains and grounds the next round of textual reasoning. A core challenge of process-driven generation stems from the ambiguity of intermediate states: how can a model evaluate a partially complete image? We address this through dense, step-wise supervision that maintains two complementary constraints: for the visual intermediate states, we enforce spatial and semantic consistency; for the textual intermediate states, we preserve prior visual knowledge while enabling the model to identify and correct prompt-violating elements. This makes the generation process explicit, interpretable, and directly supervisable. To validate the proposed method, we conduct experiments on a range of text-to-image generation benchmarks.
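The control flow of the four-stage loop can be sketched in minimal pseudocode. This is a hypothetical illustration, not the paper's implementation: the functions `plan`, `draft`, `reflect`, and `refine` stand in for model calls and operate on plain strings here, so only the interleaving of textual reasoning and visual state updates is shown.

```python
# Hypothetical sketch of the process-driven generation loop.
# plan/draft/reflect/refine are stand-ins for model calls, not a real API.

def plan(prompt, image):
    # Textual planning: decide how the visual state should evolve next.
    return f"plan({prompt} | {image})"

def draft(plan_text, image):
    # Visual drafting: produce a new intermediate image conditioned on the plan.
    return f"draft[{plan_text}]"

def reflect(prompt, image):
    # Textual reflection: identify prompt-violating elements in the draft.
    return f"reflect({prompt} | {image})"

def refine(reflection, image):
    # Visual refinement: correct the draft according to the reflection.
    return f"refine[{reflection}]"

def generate(prompt, rounds=2):
    """Run `rounds` iterations of plan -> draft -> reflect -> refine."""
    image = "<blank canvas>"
    trajectory = []
    for _ in range(rounds):
        p = plan(prompt, image)      # text conditions the next visual update
        image = draft(p, image)
        r = reflect(prompt, image)   # visual state grounds the next reasoning
        image = refine(r, image)
        trajectory.append((p, r, image))
    return image, trajectory
```

The key structural point is the bidirectional coupling: each textual step takes the current visual state as input, and each visual step is conditioned on the preceding text, so the trajectory alternates between the two modalities rather than generating the final image in one shot.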