FlowTok: Flowing Seamlessly Across Text and Image Tokens

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of bridging the fundamental disparity between text (1D semantic symbols) and images (2D spatially redundant representations) for efficient bidirectional cross-modal generation. We propose a flow-matching paradigm based on 1D tokens: images are encoded into compact 1D token sequences aligned with text tokens in a shared latent space, and a lightweight Transformer performs flow matching directly between the two modalities, eliminating conventional noise scheduling and complex conditioning mechanisms. Key contributions include: (i) a unified framework supporting both text-to-image and image-to-text generation under the same formulation; (ii) a 3.3× reduction in latent-space size at 256×256 resolution; (iii) significantly reduced training cost and faster sampling; and (iv) generation quality comparable to state-of-the-art models on standard benchmarks.

📝 Abstract
Bridging different modalities lies at the heart of cross-modality generation. While conventional approaches treat the text modality as a conditioning signal that gradually guides the denoising process from Gaussian noise to the target image modality, we explore a much simpler paradigm: directly evolving between text and image modalities through flow matching. This requires projecting both modalities into a shared latent space, which poses a significant challenge due to their inherently different representations: text is highly semantic and encoded as 1D tokens, whereas images are spatially redundant and represented as 2D latent embeddings. To address this, we introduce FlowTok, a minimal framework that seamlessly flows across text and images by encoding images into a compact 1D token representation. Compared to prior methods, this design reduces the latent space size by 3.3x at an image resolution of 256, eliminating the need for complex conditioning mechanisms or noise scheduling. Moreover, FlowTok naturally extends to image-to-text generation under the same formulation. With its streamlined architecture centered around compact 1D tokens, FlowTok is highly memory-efficient, requires significantly fewer training resources, and achieves much faster sampling speeds, all while delivering performance comparable to state-of-the-art models. Code will be available at https://github.com/bytedance/1d-tokenizer.
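The core idea described above (evolving directly from text tokens to image tokens, rather than denoising from Gaussian noise) can be sketched with the standard rectified-flow construction: a linear path between the two token sequences, and Euler integration along a learned velocity field at sampling time. This is a minimal illustration of the general technique, not FlowTok's actual implementation; the array shapes and the velocity function are placeholders.

```python
import numpy as np

def interpolate(text_tokens, image_tokens, t):
    """Linear flow-matching path x_t = (1 - t) * x_text + t * x_image,
    connecting the text modality (t=0) to the image modality (t=1)."""
    return (1 - t) * text_tokens + t * image_tokens

def euler_sample(velocity_fn, text_tokens, steps=25):
    """Generate by integrating dx/dt = v(x, t) from t=0 to t=1 with
    fixed-step Euler, starting from the text tokens themselves."""
    x, dt = text_tokens.copy(), 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity_fn(x, i * dt)
    return x

# Sanity check with the ground-truth (constant) velocity v = x_image - x_text:
# Euler integration is exact for a constant field and recovers the image tokens.
txt = np.zeros((4, 8))   # hypothetical: 4 tokens of dimension 8
img = np.ones((4, 8))
out = euler_sample(lambda x, t: img - txt, txt)
```

Training would regress a Transformer-predicted velocity against the target `image_tokens - text_tokens` at random `t`; because both endpoints live in the same compact 1D latent space, no cross-attention conditioning or noise schedule is needed.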
Problem

Research questions and friction points this paper is trying to address.

Bridging text and image modalities for cross-modality generation.
Projecting text and images into a shared latent space.
Reducing the latent space size and simplifying the generation process.
Innovation

Methods, ideas, or system contributions that make the work stand out.

FlowTok bridges text and image modalities directly.
Encodes images into compact 1D token representations.
Reduces the latent space size by 3.3x, enhancing memory efficiency.
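The compact 1D representation behind these points can be illustrated with a learned-query tokenizer in the style of the linked 1d-tokenizer repository: a short sequence of query tokens cross-attends over the 2D patch grid and pools it into far fewer 1D tokens. All sizes below are illustrative assumptions, not FlowTok's published configuration, and the single attention read stands in for a full Transformer encoder.

```python
import numpy as np

# Hypothetical sizes: a stride-16 patch grid at 256x256 gives 16x16 = 256
# patch embeddings; compressing them into 32 learned 1D tokens is an
# illustrative choice only.
NUM_PATCHES, NUM_LATENTS, DIM = 256, 32, 64

def cross_attention_pool(patches, queries):
    """One cross-attention read: learned query tokens (NUM_LATENTS, DIM)
    attend over 2D patch embeddings (NUM_PATCHES, DIM) and compress them
    into a short 1D token sequence (NUM_LATENTS, DIM)."""
    scores = queries @ patches.T / np.sqrt(DIM)        # (L, P) logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over patches
    return weights @ patches                           # (L, D) 1D tokens
```

Because the token count no longer scales with the spatial grid, the flow-matching model operates on a much smaller sequence, which is where the memory and sampling-speed gains in the bullets above come from.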