Learning to Generate via Understanding: Understanding-Driven Intrinsic Rewarding for Unified Multimodal Models

πŸ“… 2026-03-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the significant performance gap between visual understanding and generation in unified multimodal models, where generative capability often lags behind comprehension, particularly when producing semantically coherent images from complex textual prompts. To bridge this gap, the authors propose the Generation via Understanding (GvU) mechanism, which leverages the model's own understanding capacity as an intrinsic reward signal during generation, establishing a self-supervised, understanding-driven feedback framework that requires no external supervision. By combining token-level image-text alignment rewards with self-supervised reinforcement learning, the approach enables joint optimization of understanding and generation within a unified architecture. Experiments demonstrate that GvU substantially improves image generation quality while simultaneously enhancing fine-grained visual understanding, effectively narrowing the gap between the two capabilities.

πŸ“ Abstract
Recently, unified multimodal models (UMMs) have made remarkable progress in integrating visual understanding and generation, demonstrating strong potential for complex text-to-image (T2I) tasks. Despite their theoretical promise, a persistent capability gap exists: UMMs typically exhibit superior visual understanding but comparatively weaker generative capabilities. This discrepancy arises largely from the intrinsic decoupling between the understanding and generation processes. While a UMM can accurately interpret fine-grained visual details, it often struggles to produce semantically coherent images from complex textual prompts. To address this challenge, we explore UMMs' internal understanding capability to enhance generation quality. We propose a token-level intrinsic text-image alignment reward mechanism, GvU, enabling the UMM to act simultaneously as teacher and student: it evaluates its own outputs using the understanding branch and guides generation accordingly. Building upon this, we design a self-supervised reinforcement learning framework that allows UMMs to iteratively improve their generation quality through understanding-based intrinsic reward signals, without reliance on external supervision. Experimental results show that our method substantially boosts UMMs' generation, which in turn strengthens their fine-grained visual understanding, narrowing the capability gap between UMMs' visual understanding and generation.
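The self-evaluation loop described in the abstract can be sketched at a very high level as follows. This is a toy illustration only: the function names, the token-overlap scorer, and the candidate-ranking step are hypothetical stand-ins for the paper's actual understanding branch and reinforcement-learning objective, which are not specified here.

```python
# Toy sketch of an understanding-driven intrinsic reward loop.
# All names and the scoring rule are illustrative placeholders,
# NOT the paper's implementation.

def understanding_score(prompt_tokens, image_tokens):
    """Stand-in for the understanding branch: fraction of prompt
    tokens 'grounded' in the generated image tokens, mimicking a
    token-level text-image alignment reward."""
    grounded = sum(1 for t in prompt_tokens if t in image_tokens)
    return grounded / max(len(prompt_tokens), 1)

def intrinsic_reward_step(prompt_tokens, candidates):
    """Score each candidate generation with the model's own
    understanding scorer and return (best_candidate, reward):
    a self-supervised signal with no external judge."""
    scored = [(understanding_score(prompt_tokens, c), c) for c in candidates]
    best_reward, best = max(scored, key=lambda x: x[0])
    return best, best_reward

prompt = ["red", "cube", "on", "table"]
candidates = [
    ["red", "sphere", "floor"],        # weak alignment: 1/4 prompt tokens
    ["red", "cube", "table", "wood"],  # strong alignment: 3/4 prompt tokens
]
best, reward = intrinsic_reward_step(prompt, candidates)
```

In the paper's framework, a reward like `reward` above would feed a reinforcement-learning update of the generation branch rather than a simple argmax over candidates; the sketch only shows the self-scoring idea.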
Problem

Research questions and friction points this paper is trying to address.

unified multimodal models
visual understanding
image generation
capability gap
text-to-image
Innovation

Methods, ideas, or system contributions that make the work stand out.

intrinsic reward
unified multimodal models
self-supervised reinforcement learning
text-image alignment
understanding-driven generation