🤖 AI Summary
Existing diffusion models excel at image generation but exhibit limited image understanding, necessitating separate models for generation and understanding tasks. This work introduces the first end-to-end multimodal diffusion framework that unifies text-to-image generation, image captioning, and visual question answering within a single architecture. Methodologically, it (1) establishes the first joint diffusion modeling of the conditional likelihoods of both images and text; (2) formulates a cross-modal maximum-likelihood objective that optimizes a dual-branch diffusion Transformer under a single unified loss; and (3) builds on the MM-DiT architecture, integrating discrete diffusion language modeling with cross-modal diffusion training. Experiments demonstrate state-of-the-art performance among unified multimodal models on both generation and understanding benchmarks, validating that diffusion-based modeling can effectively supplant autoregressive language modeling in multimodal settings.
📝 Abstract
Diffusion models have achieved tremendous success in text-to-image generation, yet still lag behind on visual understanding tasks, an area dominated by autoregressive vision-language models. We propose a large-scale, fully end-to-end diffusion model for multimodal understanding and generation that significantly improves on existing diffusion-based multimodal models and is the first of its kind to support the full suite of vision-language modeling capabilities. Inspired by the multimodal diffusion transformer (MM-DiT) and recent advances in discrete diffusion language modeling, we adopt a cross-modal maximum likelihood estimation framework that jointly trains the conditional likelihoods of both images and text under a single loss function, back-propagated through both branches of the diffusion transformer. The resulting model is highly flexible and capable of a wide range of tasks, including image generation, captioning, and visual question answering. It attains competitive performance relative to recent unified image understanding and generation models, demonstrating the potential of multimodal diffusion modeling as a promising alternative to autoregressive next-token prediction.
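The single-loss idea described above — a continuous diffusion objective for the image branch combined with a discrete diffusion objective for the text branch, back-propagated jointly — can be sketched in a few lines. The function below is a hypothetical illustration, not the paper's actual implementation: the epsilon-prediction MSE for images, the cross-entropy form of the text loss, and the weighting `lam` are all assumptions made for clarity.

```python
import numpy as np

def unified_diffusion_loss(eps_pred, eps_true, text_logits, text_targets, lam=1.0):
    """Hypothetical sketch of a cross-modal maximum-likelihood objective.

    Image branch: continuous diffusion, scored with epsilon-prediction MSE.
    Text branch: discrete diffusion, scored with token-level cross-entropy.
    Both terms are summed into one scalar, so gradients flow through
    both branches of the (dual-branch) diffusion transformer.
    """
    # Image branch: mean-squared error between predicted and true noise
    img_loss = np.mean((eps_pred - eps_true) ** 2)

    # Text branch: cross-entropy over the vocabulary at each token position
    z = text_logits - text_logits.max(axis=-1, keepdims=True)   # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    txt_loss = -np.mean(log_probs[np.arange(len(text_targets)), text_targets])

    # Single unified loss for both modalities (lam balances the two terms)
    return img_loss + lam * txt_loss
```

In an actual training loop the two terms would be computed from the two transformer branches' outputs at sampled noise levels, and the combined scalar would drive one backward pass, which is what distinguishes this setup from training separate generation and understanding models.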