VINO: A Unified Visual Generator with Interleaved OmniModal Context

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work proposes a unified multimodal generative framework that jointly handles image and video generation and editing, overcoming the limitations of existing methods that rely on task- or modality-specific models. Built on a shared diffusion backbone, the framework integrates text, images, and videos as conditioning inputs through an interleaved multimodal context modeling mechanism, enabling multi-reference alignment, long instruction following, and consistent identity preservation across static and dynamic content, all without requiring modality-specific modules. The approach couples a vision-language model with a multimodal Diffusion Transformer, using interleaved conditional tokens to guide the diffusion process, and is trained via a multi-stage strategy. Experiments across multiple benchmarks demonstrate significant improvements in visual quality, instruction adherence, reference and attribute retention, and support for controllable multi-identity editing.

๐Ÿ“ Abstract
We present VINO, a unified visual generator that performs image and video generation and editing within a single framework. Instead of relying on task-specific models or independent modules for each modality, VINO uses a shared diffusion backbone that conditions on text, images and videos, enabling a broad range of visual creation and editing tasks under one model. Specifically, VINO couples a vision-language model (VLM) with a Multimodal Diffusion Transformer (MMDiT), where multimodal inputs are encoded as interleaved conditioning tokens, and then used to guide the diffusion process. This design supports multi-reference grounding, long-form instruction following, and coherent identity preservation across static and dynamic content, while avoiding modality-specific architectural components. To train such a unified system, we introduce a multi-stage training pipeline that progressively expands a video generation base model into a unified, multi-task generator capable of both image and video input and output. Across diverse generation and editing benchmarks, VINO demonstrates strong visual quality, faithful instruction following, improved reference and attribute preservation, and more controllable multi-identity edits. Our results highlight a practical path toward scalable unified visual generation, and the promise of interleaved, in-context computation as a foundation for general-purpose visual creation.
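The interleaved conditioning described above can be pictured as flattening heterogeneous inputs into one ordered token sequence that the diffusion backbone attends over. The sketch below is a hypothetical illustration of that idea, not VINO's actual implementation: the names `CondToken` and `interleave_context` are invented for this example, and scalar floats stand in for embedding vectors.

```python
# Hypothetical sketch of interleaved multimodal context assembly.
# Each conditioning input (text prompt, reference image, or video
# clip) contributes a run of tokens; runs are concatenated in their
# original order, so the model sees one interleaved sequence.

from dataclasses import dataclass

@dataclass
class CondToken:
    modality: str   # "text" | "image" | "video"
    source: int     # index of the originating input
    value: float    # stand-in for an embedding vector

def interleave_context(inputs):
    """Flatten (modality, embeddings) pairs into one interleaved
    conditioning sequence, preserving the order of the inputs."""
    seq = []
    for idx, (modality, embeddings) in enumerate(inputs):
        for v in embeddings:
            seq.append(CondToken(modality, idx, v))
    return seq

# Example: a text prompt, one reference image, and a short video clip.
context = interleave_context([
    ("text",  [0.1, 0.2]),
    ("image", [0.3]),
    ("video", [0.4, 0.5, 0.6]),
])
# The resulting six tokens keep both their modality tag and their
# source index, which is what lets a single backbone do
# multi-reference grounding without modality-specific branches.
```

Because every token carries its source index, downstream attention can distinguish "identity from reference image 1" from "motion from video 2" without separate per-modality encoders, which is the property the paper attributes to its interleaved design.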
Problem

Research questions and friction points this paper is trying to address.

unified visual generation
image and video editing
multimodal conditioning
diffusion models
cross-modal consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Visual Generation
Interleaved Multimodal Conditioning
Multimodal Diffusion Transformer
Shared Diffusion Backbone
Multi-stage Training Pipeline