🤖 AI Summary
Show-o suffers from low inference efficiency in bidirectional image-text generation because it denoises image tokens sequentially and decodes text tokens autoregressively. To address this, we propose a unified multimodal denoising framework, extending consistency distillation (CD) to multimodal denoising trajectories for the first time, and introduce trajectory segmentation and a curriculum learning strategy to accelerate cross-modal training convergence. We also design a parallel text decoder to replace autoregressive decoding. Experiments demonstrate that our method performs text-to-image generation in only four sampling steps (GenEval = 0.625) without classifier-free guidance (CFG), surpassing the original Show-o's eight-step performance with CFG. For image-to-text generation, inference speed improves by 1.5x with negligible performance degradation. This work establishes a new paradigm for efficient, unified multimodal generation.
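As a loose intuition for what consistency distillation optimizes, the toy sketch below fits a scalar consistency model on an analytic denoising trajectory: the student at a later point on the trajectory is trained to match a frozen teacher one solver step earlier. The trajectory, the parameterization `f`, and all names here are illustrative assumptions, not Show-o Turbo's actual multimodal training.

```python
import random

def x_on_traj(x0, t):
    """Point at time t on a toy analytic denoising trajectory."""
    return x0 * (1.0 + t)

def f(w, x, t):
    """Scalar consistency model: predict the endpoint x0 from (x_t, t).
    With this parameterization, the exact consistency function is w = 1."""
    return x / (1.0 + w * t)

def cd_loss(w_student, w_teacher, batch, dt=0.2):
    """CD objective: student at (x_t, t) matches the frozen teacher
    one solver step earlier, at (x_{t-dt}, t-dt)."""
    total = 0.0
    for x0, t in batch:
        xt, xs = x_on_traj(x0, t), x_on_traj(x0, t - dt)
        total += (f(w_student, xt, t) - f(w_teacher, xs, t - dt)) ** 2
    return total / len(batch)

random.seed(0)
w, lr, eps = 3.0, 1.0, 1e-4   # deliberately wrong initialization
for _ in range(2000):
    batch = [(random.uniform(-1, 1), random.uniform(0.3, 1.0))
             for _ in range(16)]
    # Finite-difference gradient w.r.t. the student only; the teacher
    # stays at the current w, mimicking the stop-gradient in CD.
    grad = (cd_loss(w + eps, w, batch) - cd_loss(w - eps, w, batch)) / (2 * eps)
    w -= lr * grad

assert abs(w - 1.0) < 0.05    # recovered the exact consistency function
```

The self-distillation fixed point is the true consistency function (here w = 1), which maps any point on the trajectory straight to its endpoint, enabling few-step sampling.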
📝 Abstract
There has been increasing research interest in building unified multimodal understanding and generation models, among which Show-o stands as a notable representative, showing great promise for both text-to-image and image-to-text generation. Show-o's inference involves progressively denoising image tokens and autoregressively decoding text tokens, and hence suffers from inefficiency on both fronts. This paper introduces Show-o Turbo to bridge the gap. We first identify a unified denoising perspective on the generation of images and text in Show-o, based on the parallel decoding of text tokens. We then extend consistency distillation (CD), an approach well suited to shortening the denoising process of diffusion models, to the multimodal denoising trajectories of Show-o, and introduce a trajectory segmentation strategy and a curriculum learning procedure to improve training convergence. Empirically, in text-to-image generation, Show-o Turbo achieves a GenEval score of 0.625 at 4 sampling steps without classifier-free guidance (CFG), outperforming the original Show-o with 8 steps and CFG; in image-to-text generation, Show-o Turbo exhibits a 1.5x speedup without significantly sacrificing performance. The code is available at https://github.com/zhijie-group/Show-o-Turbo.
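To give a flavor of the parallel decoding of text tokens that the unified denoising perspective rests on, the toy sketch below contrasts autoregressive decoding with Jacobi-style parallel refinement: a whole block of tokens is updated simultaneously until it reaches a fixed point, which coincides with the greedy autoregressive output. The deterministic `toy_next_token` is a hypothetical stand-in for a model's greedy argmax (a real model would score all positions in one forward pass per iteration), not Show-o itself.

```python
def toy_next_token(prefix):
    """Deterministic stand-in for the greedy argmax of a language model."""
    return (sum(prefix) * 31 + 7) % 101

def autoregressive_decode(prompt, n_new):
    """Baseline: one model call per generated token."""
    seq = list(prompt)
    for _ in range(n_new):
        seq.append(toy_next_token(seq))
    return seq[len(prompt):]

def jacobi_decode(prompt, n_new):
    """Initialize a block of guesses, then refine all positions in
    parallel until a fixed point is reached (at most n_new + 1 sweeps)."""
    guess = [0] * n_new
    iters = 0
    while True:
        iters += 1
        new = [toy_next_token(list(prompt) + guess[:i]) for i in range(n_new)]
        if new == guess:
            return new, iters
        guess = new

ar = autoregressive_decode([1, 2, 3], 8)
par, iters = jacobi_decode([1, 2, 3], 8)
assert ar == par              # same fixed point as greedy decoding
assert iters <= len(par) + 1  # position i stabilizes by sweep i + 1
```

Because position i depends only on earlier positions, each sweep locks in at least one more token, so the parallel iteration provably matches greedy decoding; speedups come from tokens that stabilize early, and this fixed-point view is what lets text decoding be treated as a denoising trajectory alongside image tokens.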