Liquid: Language Models are Scalable and Unified Multi-modal Generators

📅 2024-12-05
📈 Citations: 4
Influential: 1
🤖 AI Summary
This work addresses two key limitations of multimodal generative models: reliance on external vision encoders, and modality interference that degrades performance during joint training. The authors propose the *Liquid* paradigm: a unified large language model (LLM) architecture that handles both visual understanding and generation without a dedicated vision encoder. Visual inputs are tokenized into discrete codes via image-patch quantization, so vision and language are co-embedded in a single shared token space. The paper reports an empirical scaling law: the performance degradation from multimodal joint training vanishes as model size increases, eventually yielding positive cross-modal synergy. The framework is adapted from open-source LLMs (e.g., Qwen2.5, Gemma2) with minimal changes and trained end-to-end autoregressively. Experiments show strong text-to-image generation (FID of 5.47 on MJHQ-30K, surpassing Stable Diffusion v2.1 and SD-XL), language capabilities on par with LLaMA2, roughly 100× lower training cost, and superior multimodal performance over Chameleon.

📝 Abstract
We present Liquid, an auto-regressive generation paradigm that seamlessly integrates visual comprehension and generation by tokenizing images into discrete codes and learning these code embeddings alongside text tokens within a shared feature space for both vision and language. Unlike previous multimodal large language models (MLLMs), Liquid achieves this integration using a single large language model (LLM), eliminating the need for external pretrained visual embeddings such as CLIP. For the first time, Liquid uncovers a scaling law: the performance drop unavoidably brought by the unified training of visual and language tasks diminishes as model size increases. Furthermore, the unified token space enables visual generation and comprehension tasks to mutually enhance each other, effectively removing the interference typical of earlier models. We show that existing LLMs can serve as strong foundations for Liquid, saving 100x in training costs while outperforming Chameleon in multimodal capabilities and maintaining language performance comparable to mainstream LLMs like LLaMA2. Liquid also outperforms models like SD v2.1 and SD-XL (FID of 5.47 on MJHQ-30K), excelling in both vision-language and text-only tasks. This work demonstrates that LLMs such as Qwen2.5 and Gemma2 are powerful multimodal generators, offering a scalable solution for enhancing both vision-language understanding and generation. The code and models will be released at https://github.com/FoundationVision/Liquid.
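The core mechanism described above, quantizing image patches into discrete codes and placing them in the same token id space as text, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the vocabulary and codebook sizes are hypothetical placeholders.

```python
# Hypothetical sizes; Liquid's actual text vocabulary and VQ codebook differ.
TEXT_VOCAB = 32000    # text token ids occupy [0, TEXT_VOCAB)
VISUAL_CODES = 8192   # image codes occupy [TEXT_VOCAB, TEXT_VOCAB + VISUAL_CODES)

def image_token(code: int) -> int:
    """Map a VQ codebook index into the shared token id space by offsetting
    it past the text vocabulary, so one embedding table covers both modalities."""
    assert 0 <= code < VISUAL_CODES
    return TEXT_VOCAB + code

def is_visual(token_id: int) -> bool:
    """A token id at or beyond the text vocabulary denotes an image code."""
    return token_id >= TEXT_VOCAB

# One autoregressive sequence interleaving text ids with quantized image patches:
seq = [101, 2009, 2003, image_token(0), image_token(4271)]
print([is_visual(t) for t in seq])  # [False, False, False, True, True]
```

Because text and image tokens share one id space, a single LLM can predict the next token regardless of modality, which is what lets generation and comprehension train jointly without a separate vision encoder.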
Problem

Research questions and friction points this paper is trying to address.

Existing MLLMs rely on external pretrained visual encoders (e.g., CLIP) rather than a single unified model.
Joint training of visual and language tasks typically causes modality interference and performance degradation.
It is unclear whether this degradation persists, or diminishes, as model size scales.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified token space in which vision and language tasks mutually enhance each other
Single LLM handles comprehension and generation, eliminating external visual embeddings
Empirical scaling law: the joint-training performance drop diminishes with model size