🤖 AI Summary
Existing multimodal generation methods rely predominantly on text-driven approaches and struggle to handle heterogeneous inputs such as textual prompts, layout specifications, and editing instructions in a uniform way. This work proposes a vision-centric paradigm that unifies all input modalities into visual prompts, reframing multimodal generation as a purely visual flow matching task. The resulting end-to-end image-to-image framework operates without cross-modal alignment modules or task-specific branches. Leveraging the newly curated VisPrompt-5M dataset and the VP-Bench evaluation benchmark, the proposed method significantly outperforms both leading open-source and commercial systems across multiple unified generation tasks, demonstrating the efficacy of an all-vision generative paradigm.
📝 Abstract
Multimodal generation has long been dominated by text-driven pipelines where language dictates vision but cannot reason or create within it. We challenge this paradigm by asking whether all modalities, including textual descriptions, spatial layouts, and editing instructions, can be unified into a single visual representation. We present FlowInOne, a framework that reformulates multimodal generation as a purely visual flow, converting all inputs into visual prompts and enabling a clean image-in, image-out pipeline governed by a single flow matching model. This vision-centric formulation naturally eliminates cross-modal alignment bottlenecks, noise scheduling, and task-specific architectural branches, unifying text-to-image generation, layout-guided editing, and visual instruction following under one coherent paradigm. To support this, we introduce VisPrompt-5M, a large-scale dataset of 5 million visual prompt pairs spanning diverse tasks including physics-aware force dynamics and trajectory prediction, alongside VP-Bench, a rigorously curated benchmark assessing instruction faithfulness, spatial precision, visual realism, and content consistency. Extensive experiments demonstrate that FlowInOne achieves state-of-the-art performance across all unified generation tasks, surpassing both open-source models and competitive commercial systems, establishing a new foundation for fully vision-centric generative modeling where perception and creation coexist within a single continuous visual space.
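The abstract does not spell out the training objective, but the "image-in, image-out pipeline governed by a single flow matching model" suggests a conditional flow matching setup where the visual prompt is simply another image channel-wise input. The sketch below illustrates what such a training step could look like under a rectified-flow formulation; all names (`VelocityNet`, `flow_matching_step`, `visual_prompt`) are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of a conditional flow-matching training step for an
# image-in, image-out model. Assumes a rectified-flow (straight-path)
# formulation; architecture and names are placeholders, not FlowInOne's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VelocityNet(nn.Module):
    """Toy velocity field v_theta(x_t, t, visual_prompt) -> dx/dt."""
    def __init__(self, channels: int = 3, hidden: int = 64):
        super().__init__()
        # The visual prompt image is concatenated channel-wise with the noisy
        # target, so conditioning stays purely visual (no text encoder).
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels + 1, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x_t, t, visual_prompt):
        # Broadcast scalar time t to a spatial map and concatenate it as a channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[-2:])
        return self.net(torch.cat([x_t, visual_prompt, t_map], dim=1))

def flow_matching_step(model, visual_prompt, target, optimizer):
    """One step: regress the constant velocity along the noise-to-target path."""
    noise = torch.randn_like(target)                      # x_0 ~ N(0, I)
    t = torch.rand(target.size(0), device=target.device)  # t ~ U(0, 1)
    t_exp = t.view(-1, 1, 1, 1)
    x_t = (1.0 - t_exp) * noise + t_exp * target          # linear interpolation path
    v_target = target - noise                             # velocity of that path
    loss = F.mse_loss(model(x_t, t, visual_prompt), v_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: both tensors are (B, 3, H, W) images; the prompt image would be the
# rendered visual form of a text description, layout, or editing instruction.
model = VelocityNet()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
prompt_imgs, target_imgs = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
print(flow_matching_step(model, prompt_imgs, target_imgs, opt))
```

In this reading, unifying tasks reduces to rendering every instruction into the prompt image; the generative model itself never changes across text-to-image generation, layout-guided editing, or visual instruction following.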