Query-Kontext: An Unified Multimodal Model for Image Generation and Editing

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
In current unified multimodal generation frameworks, generative reasoning capabilities such as instruction following, visual grounding, and identity preservation are tightly coupled with high-fidelity image synthesis, limiting model generalizability and controllability. To address this, we propose a decoupled unified framework featuring a novel multimodal "kontext" module that explicitly bridges vision-language models and diffusion models, separating reasoning from synthesis. Our method employs a three-stage progressive training strategy that integrates a lightweight diffusion head, a large-scale pretrained diffusion backbone, and a low-level image encoder, followed by instruction tuning on diverse data drawn from real, synthetic, and open-source sources. Extensive experiments demonstrate that our approach matches or surpasses state-of-the-art performance across multi-subject composition, personalized generation, and fine-grained instruction-editing tasks, validating its effectiveness, robustness, and broad applicability.
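
To make the decoupling concrete, the following PyTorch sketch shows one way learnable query tokens could carry a VLM's output into a diffusion model's cross-attention as multimodal kontext. This is a minimal illustration under assumed interfaces; the names (KontextBridge, num_kontext_tokens, the commented vlm / diffusion_unet calls) are hypothetical and not the paper's released code.

```python
import torch
import torch.nn as nn

class KontextBridge(nn.Module):
    """Maps VLM hidden states at learnable query positions into conditioning
    tokens ("kontext") consumed by a diffusion model's cross-attention."""
    def __init__(self, vlm_dim: int, diff_dim: int, num_kontext_tokens: int = 64):
        super().__init__()
        # Learnable queries appended to the VLM input; their output states are
        # assumed to carry semantic cues and coarse-grained image conditions.
        self.query_tokens = nn.Parameter(torch.randn(num_kontext_tokens, vlm_dim) * 0.02)
        self.proj = nn.Linear(vlm_dim, diff_dim)  # project into the diffusion model's width

    def forward(self, vlm_last_hidden: torch.Tensor) -> torch.Tensor:
        # vlm_last_hidden: (batch, seq_len, vlm_dim); the trailing positions are
        # assumed to correspond to the appended query tokens.
        n = self.query_tokens.shape[0]
        kontext_states = vlm_last_hidden[:, -n:, :]
        return self.proj(kontext_states)  # (batch, n, diff_dim)

# Usage idea: the VLM handles instruction understanding, grounding, and image
# referring, while the diffusion model only consumes the projected kontext tokens.
# kontext = bridge(vlm(input_ids, pixel_values).last_hidden_state)
# noise_pred = diffusion_unet(noisy_latents, timesteps, encoder_hidden_states=kontext)
```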

📝 Abstract
Unified Multimodal Models (UMMs) have demonstrated remarkable performance in text-to-image generation (T2I) and editing (TI2I), whether instantiated as assembled unified frameworks that couple a powerful vision-language model (VLM) with a diffusion-based generator, or as naive Unified Multimodal Models with an early fusion of understanding and generation modalities. We contend that in current unified frameworks, the crucial capability of multimodal generative reasoning, which encompasses instruction understanding, grounding, and image referring for identity preservation and faithful reconstruction, is intrinsically entangled with high-fidelity synthesis. In this work, we introduce Query-Kontext, a novel approach that bridges the VLM and diffusion model via a multimodal "kontext" composed of semantic cues and coarse-grained image conditions encoded from multimodal inputs. This design delegates the complex ability of multimodal generative reasoning to the powerful VLM while reserving the diffusion model's role for high-quality visual synthesis. To achieve this, we propose a three-stage progressive training strategy. First, we connect the VLM to a lightweight diffusion head via multimodal kontext tokens to unleash the VLM's generative reasoning ability. Second, we scale this head to a large, pre-trained diffusion model to enhance visual detail and realism. Finally, we introduce a low-level image encoder to improve image fidelity and perform instruction tuning on downstream tasks. Furthermore, we build a comprehensive data pipeline integrating real, synthetic, and open-source datasets, covering diverse multimodal reference-to-image scenarios, including image generation, instruction-driven editing, customized generation, and multi-subject composition. Experiments show that our approach matches strong unified baselines and even outperforms task-specific state-of-the-art methods in several cases.
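
For readers who want the staged schedule in one place, here is a minimal sketch of how the three stages described in the abstract could be organized. The stage names, trainable-component splits, and data labels are assumptions for illustration, not the paper's exact recipe.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Stage:
    name: str
    generator: str        # synthesis module attached to the kontext tokens
    trainable: List[str]   # components assumed to be updated in this stage
    data: str              # kind of data used

stages = [
    # Stage 1: a lightweight diffusion head unlocks the VLM's generative reasoning.
    Stage("align", generator="lightweight_diffusion_head",
          trainable=["kontext_bridge", "diffusion_head"], data="text-image pairs"),
    # Stage 2: swap in a large pre-trained diffusion backbone for detail and realism.
    Stage("scale", generator="pretrained_diffusion_backbone",
          trainable=["kontext_bridge", "diffusion_backbone"], data="large-scale generation data"),
    # Stage 3: add a low-level image encoder for fidelity, then instruction-tune.
    Stage("instruct", generator="pretrained_diffusion_backbone",
          trainable=["kontext_bridge", "diffusion_backbone", "low_level_image_encoder"],
          data="editing, customization, and multi-subject composition data"),
]

for s in stages:
    print(f"{s.name}: train {s.trainable} on {s.data} with {s.generator}")
```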
Problem

Research questions and friction points this paper is trying to address.

Bridging multimodal reasoning with high-fidelity image synthesis
Separating generative reasoning from visual synthesis in unified models
Enhancing image generation and editing via multimodal kontext tokens
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bridges the VLM and the diffusion model via multimodal kontext tokens
Uses a three-stage progressive training strategy
Integrates real, synthetic, and open-source datasets for multimodal reference-to-image tasks (see the sketch after this list)
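
As a rough picture of how such a data pipeline might mix its sources across the reference-to-image scenarios listed in the abstract, the sketch below samples (source, task) pairs. The groupings and sampling weights are placeholders, not the paper's actual mixture.

```python
import random

# Assumed grouping of tasks by data source; purely illustrative.
SOURCES = {
    "real": ["t2i_generation", "instruction_editing"],
    "synthetic": ["customized_generation", "multi_subject_composition"],
    "open_source": ["t2i_generation", "instruction_editing", "multi_subject_composition"],
}
WEIGHTS = {"real": 0.5, "synthetic": 0.3, "open_source": 0.2}  # assumed mixture ratios

def sample_task(rng: random.Random) -> tuple:
    """Draw a (source, task) pair according to the assumed mixture."""
    source = rng.choices(list(WEIGHTS), weights=list(WEIGHTS.values()))[0]
    task = rng.choice(SOURCES[source])
    return source, task

rng = random.Random(0)
print([sample_task(rng) for _ in range(3)])
```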