Vision as a Dialect: Unifying Visual Understanding and Generation via Text-Aligned Representations

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the semantic gap between vision and language by proposing a unified cross-modal framework for joint visual understanding and generation. It introduces a Text-Aligned Tokenizer (TA-Tok) and a scale-adaptive encoder-decoder that map images into discrete semantic representations aligned with the vocabulary space of large language models. A generative de-tokenizer that integrates autoregressive modeling with diffusion priors enables high-fidelity image reconstruction. The model supports bidirectional multimodal input and output (image↔text) without modality-specific architectures. Key innovations, including multimodal pre-training, dual-path de-tokenization, and advanced pre-training objectives, significantly improve training efficiency and convergence speed. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, covering both robust multimodal understanding and high-quality visual generation.
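The core idea of text-aligned tokenization can be sketched as nearest-neighbor quantization of continuous image features against a codebook that lives in an LLM's text embedding space. The sketch below is illustrative only: the dimensions, random stand-in arrays, and function name are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's real dimensions are not stated here.
vocab_size, embed_dim, num_patches = 1000, 64, 16

# Stand-in for an LLM's text embedding table; TA-Tok projects its
# codebook from this vocabulary space.
text_embeddings = rng.normal(size=(vocab_size, embed_dim))

# Stand-in for continuous patch features from a vision encoder.
patch_features = rng.normal(size=(num_patches, embed_dim))

def text_aligned_tokenize(features, codebook):
    """Quantize each patch feature to its nearest codebook entry,
    yielding discrete token ids in the LLM's vocabulary space."""
    # Squared Euclidean distance between every feature and every code.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

token_ids = text_aligned_tokenize(patch_features, text_embeddings)
print(token_ids.shape)  # one discrete token id per image patch
```

Because the resulting ids index into the (expanded) LLM vocabulary, image and text tokens can share one autoregressive interface, which is the property the summary above emphasizes.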

📝 Abstract
This paper presents a multimodal framework that attempts to unify visual understanding and generation within a shared discrete semantic representation. At its core is the Text-Aligned Tokenizer (TA-Tok), which converts images into discrete tokens using a text-aligned codebook projected from a large language model's (LLM) vocabulary. By integrating vision and text into a unified space with an expanded vocabulary, our multimodal LLM, Tar, enables cross-modal input and output through a shared interface, without the need for modality-specific designs. Additionally, we propose scale-adaptive encoding and decoding to balance efficiency and visual detail, along with a generative de-tokenizer to produce high-fidelity visual outputs. To address diverse decoding needs, we utilize two complementary de-tokenizers: a fast autoregressive model and a diffusion-based model. To enhance modality fusion, we investigate advanced pre-training tasks, demonstrating improvements in both visual understanding and generation. Experiments across benchmarks show that Tar matches or surpasses existing multimodal LLM methods, achieving faster convergence and greater training efficiency. Code, models, and data are available at https://tar.csuhan.com
Problem

Research questions and friction points this paper is trying to address.

Unify visual understanding and generation via text-aligned representations
Balance efficiency and visual detail with scale-adaptive encoding
Enhance modality fusion through advanced pre-training tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-Aligned Tokenizer unifies vision and text
Scale-adaptive encoding balances efficiency and detail
Dual de-tokenizers enable diverse visual outputs
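The scale-adaptive encoding idea, trading token count against visual detail, can be sketched as pooling a fine-grained token sequence down to a coarser one. The function, pooling factor, and sizes below are hypothetical stand-ins for illustration, not the paper's actual mechanism:

```python
import numpy as np

def scale_adaptive_pool(tokens, target_len):
    """Average-pool a sequence of token embeddings down to target_len,
    trading visual detail for efficiency (a sketch of scale-adaptive
    encoding; the paper's real mechanism may differ)."""
    n, d = tokens.shape
    assert n % target_len == 0, "sketch assumes an integer pooling factor"
    factor = n // target_len
    return tokens.reshape(target_len, factor, d).mean(axis=1)

rng = np.random.default_rng(1)
full = rng.normal(size=(256, 8))        # fine-grained: 256 tokens
coarse = scale_adaptive_pool(full, 64)  # coarse: 64 tokens, 4x cheaper
print(coarse.shape)
```

A coarser scale shortens the LLM's sequence (faster understanding and generation), while the finer scale preserves detail for high-fidelity decoding by the de-tokenizers.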