🤖 AI Summary
This work addresses key limitations of autoregressive multimodal models: strong sequential dependencies, high training costs, and weak cross-modal coordination. We propose the first non-autoregressive multimodal model to support variable-length, concurrent text–image generation. The method pairs an insertion-based Edit Flow over discrete text tokens with Flow Matching over image latent variables, enabling parallel, interleaved generation and iterative refinement in a unified cross-modal latent space. A hierarchical sampling mechanism prioritizes structural content, relaxing strict generation-order constraints. Across 1B–8B parameter scales, the model consistently outperforms autoregressive baselines on multimodal generation and understanding tasks, reduces training FLOPs by up to 50%, and surpasses both state-of-the-art diffusion and mainstream autoregressive approaches in overall performance.
📝 Abstract
We present OneFlow, the first non-autoregressive multimodal model that enables variable-length, concurrent mixed-modal generation. Unlike autoregressive models, which enforce a rigid causal ordering between text and image generation, OneFlow combines an insertion-based Edit Flow for discrete text tokens with Flow Matching for image latents, enabling concurrent text–image synthesis with hierarchical sampling that prioritizes content over grammar. Through controlled experiments across model sizes from 1B to 8B, we demonstrate that OneFlow outperforms autoregressive baselines on both generation and understanding tasks while using up to 50% fewer training FLOPs. OneFlow surpasses both autoregressive and diffusion-based approaches while unlocking new capabilities for concurrent generation, iterative refinement, and reasoning-like generation.
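To make the core idea concrete, the loop below is a toy, self-contained sketch (not the paper's implementation) of what "concurrent mixed-modal generation" means mechanically: at every step, a continuous image latent takes one Euler step along a flow-matching ODE while the text sequence grows by token insertions, with earlier steps favoring content-bearing tokens and later steps favoring grammatical ones (a stand-in for hierarchical sampling). The `velocity_model` and `propose_insertions` functions are hypothetical placeholders for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_model(x_t, t):
    # Stand-in for a learned velocity field v_theta(x_t, t).
    # Here it simply drives the latent toward a fixed "target" latent,
    # mimicking the linear flow-matching path x_t = (1 - t) * noise + t * target.
    target = np.ones_like(x_t)
    return (target - x_t) / max(1.0 - t, 1e-3)

def propose_insertions(tokens, t):
    # Stand-in for an insertion head (Edit Flow): it proposes a random
    # number of tokens to insert at random positions. As a cartoon of
    # hierarchical sampling, early steps insert "content" placeholders
    # and later steps insert "grammar" placeholders.
    vocab = ["NOUN", "VERB"] if t < 0.5 else ["det", "punct"]
    out = list(tokens)
    for _ in range(rng.poisson(1.0)):
        pos = rng.integers(0, len(out) + 1)
        out.insert(pos, vocab[rng.integers(len(vocab))])
    return out

def concurrent_sample(latent_dim=4, steps=10):
    x = rng.standard_normal(latent_dim)  # image latent starts as noise
    tokens = []                          # text starts empty, grows by insertion
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_model(x, t)       # Euler step of the flow ODE
        tokens = propose_insertions(tokens, t)  # parallel text insertions
    return x, tokens

latent, text = concurrent_sample()
```

The point of the sketch is the control flow, not the models: both modalities are refined in the same loop with no causal ordering between them, and the text length is decided on the fly by the insertion process rather than fixed in advance.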