AI Summary
This work addresses the significant performance gap between visual understanding and generation in unified multimodal models, where generative capability often lags behind comprehension, particularly when producing semantically coherent images from complex textual prompts. To bridge this gap, the authors propose the Generation via Understanding (GvU) mechanism, which leverages the model's own understanding capacity as an intrinsic reward signal during generation, establishing a self-supervised, understanding-driven feedback framework that requires no external supervision. By integrating token-level image-text alignment rewards with self-supervised reinforcement learning, the approach enables joint optimization of understanding and generation within a unified architecture. Experiments demonstrate that GvU substantially improves image generation quality while simultaneously enhancing fine-grained visual understanding, effectively narrowing the gap between the two capabilities.
Abstract
Recently, unified multimodal models (UMMs) have made remarkable progress in integrating visual understanding and generation, demonstrating strong potential for complex text-to-image (T2I) tasks. Despite this promise, a persistent capability gap remains: UMMs typically exhibit superior visual understanding but comparatively weaker generative capabilities. This discrepancy arises largely from the intrinsic decoupling between the understanding and generation processes. While a UMM can accurately interpret fine-grained visual details, it often struggles to produce semantically coherent images from complex textual prompts. To address this challenge, we exploit UMMs' internal understanding capability to enhance generation quality. We propose a token-level intrinsic text-image alignment reward mechanism, GvU, which enables the UMM to act simultaneously as teacher and student: it evaluates its own outputs with the understanding branch and guides generation accordingly. Building upon this, we design a self-supervised reinforcement learning framework that allows UMMs to iteratively improve their generation quality through understanding-based intrinsic reward signals, without relying on external supervision. Experimental results show that our method substantially boosts UMMs' generation quality, which in turn strengthens their fine-grained visual understanding, narrowing the capability gap between UMMs' visual understanding and generation.
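The teacher-student loop described in the abstract can be sketched as a minimal self-supervised RL loop. Everything below (`ToyUMM`, the bag-of-tokens "image", the score-function update and its baseline) is an illustrative assumption standing in for the paper's actual model and reward, not the authors' implementation:

```python
# Toy sketch of an understanding-driven intrinsic reward loop in the spirit
# of GvU: the same model generates (student) and scores alignment (teacher).
import math
import random

class ToyUMM:
    """Stand-in unified model over a tiny shared token vocabulary."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        # One scalar preference per candidate image token (toy generation policy).
        self.logits = {tok: 0.0 for tok in ("red", "blue", "cat", "dog")}

    def generate(self, n=4):
        # "Generation branch": sample n image tokens from a softmax policy.
        toks = list(self.logits)
        weights = [math.exp(self.logits[t]) for t in toks]
        return [self.rng.choices(toks, weights)[0] for _ in range(n)]

    def understand(self, prompt_tokens, image_tokens):
        # "Understanding branch": token-level alignment reward — here simply
        # whether each prompt token is grounded in the generated tokens.
        # A real UMM would use its understanding branch's likelihoods instead.
        return [1.0 if p in image_tokens else 0.0 for p in prompt_tokens]

def gvu_step(model, prompt_tokens, lr=0.5):
    image_tokens = model.generate()
    token_rewards = model.understand(prompt_tokens, image_tokens)
    reward = sum(token_rewards) / len(token_rewards)  # intrinsic reward
    # REINFORCE-style update with a fixed 0.5 baseline: tokens sampled in
    # well-aligned generations are reinforced, others discouraged.
    for tok in image_tokens:
        model.logits[tok] += lr * (reward - 0.5)
    return reward

model = ToyUMM()
history = [gvu_step(model, ["red", "cat"]) for _ in range(200)]
print(f"mean reward over last 20 steps: {sum(history[-20:]) / 20:.2f}")
```

The key design point the sketch mirrors is that no external judge appears anywhere in the loop: the reward is computed by the same model that generated the sample, so improvement is self-supervised.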