🤖 AI Summary
This work addresses the challenge of unifying multimodal understanding, text-to-image generation, and image editing within a single model while achieving state-of-the-art (SOTA) performance. To this end, we propose Ovis-U1, a 3B-parameter unified multimodal model, built around a novel end-to-end training paradigm: "language model backbone + diffusion-based visual decoder + bidirectional token refiner." This architecture removes the rigid boundary between understanding and generation tasks, enabling cross-modal co-optimization. Built upon the Ovis framework, Ovis-U1 jointly learns all three capabilities in a shared parameter space. It scores 69.6 on the OpenCompass multimodal leaderboard, and 83.72 and 0.89 on the DPG-Bench and GenEval generation benchmarks, respectively. It also significantly outperforms mainstream methods on image editing benchmarks.
📝 Abstract
In this report, we introduce Ovis-U1, a 3-billion-parameter unified model that integrates multimodal understanding, text-to-image generation, and image editing. Building on the foundation of the Ovis series, Ovis-U1 incorporates a diffusion-based visual decoder paired with a bidirectional token refiner, enabling image generation quality comparable to that of leading models such as GPT-4o. Unlike some previous models that use a frozen MLLM for generation tasks, Ovis-U1 adopts a new unified training approach that starts from a language model. Compared with training solely on understanding or generation tasks, unified training yields better performance, demonstrating the benefit of integrating the two. Ovis-U1 achieves a score of 69.6 on the OpenCompass Multi-modal Academic Benchmark, surpassing recent state-of-the-art models such as Ristretto-3B and SAIL-VL-1.5-2B. In text-to-image generation, it scores 83.72 on DPG-Bench and 0.89 on GenEval. For image editing, it achieves 4.00 on ImgEdit-Bench and 6.42 on GEdit-Bench-EN. As the initial version of the Ovis unified model series, Ovis-U1 pushes the boundaries of multimodal understanding, generation, and editing.
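The data flow described above (LLM backbone → bidirectional token refiner → diffusion-based visual decoder) can be sketched in plain Python. This is a toy illustration of how the three components compose in one model; every class and method name here is an assumption for exposition, not the actual Ovis-U1 implementation:

```python
# Toy sketch of a unified multimodal pipeline, assuming the three-stage
# composition described in the abstract. All names are illustrative.

class LLMBackbone:
    """Shared language-model backbone producing contextual hidden states."""
    def forward(self, tokens):
        # Stand-in for transformer layers over text/visual tokens.
        return [t * 2.0 for t in tokens]

class BidirectionalTokenRefiner:
    """Refines visual tokens with bidirectional context before decoding."""
    def forward(self, hidden):
        # Stand-in for bidirectional attention: mix each token with the
        # sequence mean so every position sees the full context.
        mean = sum(hidden) / len(hidden)
        return [(h + mean) / 2.0 for h in hidden]

class DiffusionVisualDecoder:
    """Decodes refined conditioning tokens into pixels (diffusion stand-in)."""
    def forward(self, cond):
        # A real decoder would run iterative denoising conditioned on `cond`.
        return [round(c, 3) for c in cond]

class UnifiedModel:
    """One parameter space serving understanding, generation, and editing."""
    def __init__(self):
        self.backbone = LLMBackbone()
        self.refiner = BidirectionalTokenRefiner()
        self.decoder = DiffusionVisualDecoder()

    def generate_image(self, prompt_tokens):
        hidden = self.backbone.forward(prompt_tokens)
        refined = self.refiner.forward(hidden)
        return self.decoder.forward(refined)

model = UnifiedModel()
pixels = model.generate_image([1.0, 2.0, 3.0])
```

The point of the sketch is the shared backbone: understanding tasks would read `hidden` directly, while generation and editing route it through the refiner and decoder, so all three capabilities train against the same parameters.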