Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation

📅 2025-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Show-o suffers from low inference efficiency in bidirectional image-text generation due to sequential denoising of image tokens and autoregressive text decoding. To address this, we propose a unified multimodal denoising modeling framework—extending Consistency Distillation (CD) to multimodal denoising trajectories for the first time—and introduce trajectory segmentation and a curriculum learning strategy to accelerate cross-modal training convergence. Additionally, we design a parallel text decoder to replace autoregressive decoding. Experiments demonstrate that our method achieves text-to-image generation in only four sampling steps (GenEval = 0.625), surpassing the original Show-o’s eight-step + CFG performance without classifier-free guidance. For image-to-text generation, inference speed improves by 1.5× with negligible performance degradation. This work establishes a new paradigm for efficient, unified multimodal generation.
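The parallel text decoder replaces token-by-token autoregressive decoding with a fixed-point refinement of a whole block of draft tokens. The sketch below illustrates the general Jacobi-style parallel decoding idea, assuming a toy deterministic next-token function in place of the actual Show-o transformer; the function `next_token` and all parameters are illustrative stand-ins, not the paper's implementation.

```python
# Jacobi-style parallel decoding sketch: refine all draft tokens at once
# until a fixed point, which coincides with the autoregressive result.

def next_token(prefix):
    # Toy stand-in for argmax over model logits (hypothetical rule).
    return (sum(prefix) * 31 + len(prefix)) % 1000

def jacobi_decode(prompt, n_new, max_iters=50):
    """Refine all n_new draft tokens in parallel until a fixed point."""
    draft = [0] * n_new
    for it in range(max_iters):
        seq = prompt + draft
        # Update every position simultaneously from the current sequence.
        new_draft = [next_token(seq[: len(prompt) + i]) for i in range(n_new)]
        if new_draft == draft:  # fixed point reached
            return draft, it + 1
        draft = new_draft
    return draft, max_iters

def autoregressive_decode(prompt, n_new):
    seq = list(prompt)
    for _ in range(n_new):
        seq.append(next_token(seq))
    return seq[len(prompt):]

prompt = [5, 17, 3]
par, iters = jacobi_decode(prompt, 8)
ar = autoregressive_decode(prompt, 8)
assert par == ar  # parallel fixed point matches sequential decoding
```

Because position i only depends on positions before it, each Jacobi iteration finalizes at least one more token, so the loop needs at most n_new iterations; in practice many positions settle earlier, which is where the speedup comes from.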

📝 Abstract
There has been increasing research interest in building unified multimodal understanding and generation models, among which Show-o stands as a notable representative, demonstrating great promise for both text-to-image and image-to-text generation. The inference of Show-o involves progressively denoising image tokens and autoregressively decoding text tokens, and hence, unfortunately, suffers from inefficiency issues from both sides. This paper introduces Show-o Turbo to bridge the gap. We first identify a unified denoising perspective for the generation of images and text in Show-o based on the parallel decoding of text tokens. We then propose to extend consistency distillation (CD), a qualified approach for shortening the denoising process of diffusion models, to the multimodal denoising trajectories of Show-o. We introduce a trajectory segmentation strategy and a curriculum learning procedure to improve the training convergence. Empirically, in text-to-image generation, Show-o Turbo displays a GenEval score of 0.625 at 4 sampling steps without using classifier-free guidance (CFG), outperforming that of the original Show-o with 8 steps and CFG; in image-to-text generation, Show-o Turbo exhibits a 1.5x speedup without significantly sacrificing performance. The code is available at https://github.com/zhijie-group/Show-o-Turbo.
Problem

Research questions and friction points this paper is trying to address.

Sequential denoising of image tokens slows text-to-image generation
Autoregressive text decoding bottlenecks image-to-text inference
No unified acceleration scheme covers both modalities in models like Show-o
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel decoding of text tokens
Consistency distillation for multimodal denoising
Trajectory segmentation and curriculum learning
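The trajectory-segmentation idea can be sketched numerically: the student is trained so that any state within a segment of the teacher's denoising trajectory maps to that segment's endpoint, rather than jumping all the way to the final clean state. The toy trajectory, decay rule, and loss below are illustrative assumptions, not the actual Show-o Turbo objective.

```python
# Consistency distillation over a segmented denoising trajectory (sketch).
# Toy deterministic "teacher" trajectory: exponential decay toward 0.

def teacher_trajectory(x_T, steps=16):
    xs = [x_T]
    for _ in range(steps):
        xs.append(xs[-1] * 0.8)
    return xs  # xs[0] = noisiest state, xs[-1] = clean state

def segment_endpoints(n_steps, n_segments):
    # Split step indices 0..n_steps into equal segments; return each
    # segment's terminal index (the consistency target for that segment).
    seg = n_steps // n_segments
    return [min((k + 1) * seg, n_steps) for k in range(n_segments)]

def consistency_loss(student, xs, ends):
    # Student f(x_t, t) should output the endpoint of t's segment.
    loss = 0.0
    for t, x_t in enumerate(xs[:-1]):
        end = next(e for e in ends if e >= t + 1)
        loss += (student(x_t, t) - xs[end]) ** 2
    return loss / (len(xs) - 1)

# A "perfect" student that knows the decay rule, to sanity-check the loss.
def oracle_student_factory(ends):
    def f(x_t, t):
        end = next(e for e in ends if e >= t + 1)
        return x_t * (0.8 ** (end - t))
    return f

xs = teacher_trajectory(4.0, steps=16)
ends = segment_endpoints(16, n_segments=4)  # segment ends: 4, 8, 12, 16
oracle = oracle_student_factory(ends)
assert consistency_loss(oracle, xs, ends) < 1e-12
```

A curriculum over segments, as described in the paper, would then progressively merge segments (reduce `n_segments`) as training converges, so the student learns longer and longer denoising jumps.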
Chenkai Xu
Shanghai Jiao Tong University
Xu Wang
Shanghai Jiao Tong University
Zhenyi Liao
Shanghai Jiao Tong University
Yishun Li
Tongji University
Tianqi Hou
Theory Lab, Central Research Institute, 2012 Labs, Huawei Technologies Co., Ltd.
statistical physics, machine learning, high-dimensional statistics, computational neuroscience
Zhijie Deng
Shanghai Jiao Tong University