Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces

📅 2025-06-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing diffusion models for multimodal joint generation rely on pretrained tokenizers or VAEs to map heterogeneous data into a unified unimodal representation, making them sensitive to encoder-decoder fidelity and limiting generalization. This work introduces a multimodal diffusion framework that operates natively on arbitrary state spaces, enabling text-image joint generation and mixed-type tabular data synthesis without modality pre-alignment. The core innovation is a decoupled noise schedule for each modality, which unifies unconditional generation and generation conditioned on any subset of modalities within a single model. The method combines multimodal diffusion modeling, state-space-agnostic diffusion formulations, and end-to-end joint optimization. Experiments on both tasks show competitive performance while reducing dependence on high-fidelity external encoders and decoders.
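
As a concrete picture of the decoupled schedule, here is a minimal sketch (hypothetical PyTorch, not the authors' code; the pinning probability `p_cond` is an illustrative assumption): each modality draws its own diffusion time, and a modality whose time is pinned at 0 stays clean and acts as conditioning, so one network covers unconditional generation and generation conditioned on any subset of modalities.

```python
import torch

def sample_decoupled_times(batch_size: int, p_cond: float = 0.3):
    """Draw an independent diffusion time per modality.

    With probability p_cond a modality's time is pinned at 0 (it stays
    clean), so a training batch mixes unconditional examples with examples
    conditioned on the image, on the text, or on both.
    """
    t_img = torch.rand(batch_size)
    t_txt = torch.rand(batch_size)
    t_img[torch.rand(batch_size) < p_cond] = 0.0
    t_txt[torch.rand(batch_size) < p_cond] = 0.0
    return t_img, t_txt

t_img, t_txt = sample_decoupled_times(8)
# Entries equal to 0 mark modalities that serve as clean conditioning.
```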

๐Ÿ“ Abstract
Diffusion models have demonstrated remarkable performance in generating unimodal data across various tasks, including image, video, and text generation. On the contrary, the joint generation of multimodal data through diffusion models is still in the early stages of exploration. Existing approaches heavily rely on external preprocessing protocols, such as tokenizers and variational autoencoders, to harmonize varied data representations into a unified, unimodal format. This process heavily demands the high accuracy of encoders and decoders, which can be problematic for applications with limited data. To lift this restriction, we propose a novel framework for building multimodal diffusion models on arbitrary state spaces, enabling native generation of coupled data across different modalities. By introducing an innovative decoupled noise schedule for each modality, we enable both unconditional and modality-conditioned generation within a single model simultaneously. We empirically validate our approach for text-image generation and mixed-type tabular data synthesis, demonstrating that it achieves competitive performance.
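
To illustrate what operating on arbitrary state spaces means in practice, here is a hedged sketch (hypothetical PyTorch, assuming a linear signal-retention schedule, VP-style Gaussian corruption for the continuous modality, and mask-based corruption for the discrete one; `MASK_ID` is an assumed mask-token id): each modality is noised in its native space at its own time, with no shared tokenizer or VAE in between.

```python
import torch

MASK_ID = 0  # assumed mask-token id for the discrete modality

def alpha(t):
    """Signal-retention schedule; a linear choice, for illustration only."""
    return 1.0 - t

def noise_continuous(x0, t):
    """VP-style Gaussian corruption of a continuous modality at time t."""
    a = alpha(t)
    eps = torch.randn_like(x0)
    return a * x0 + (1 - a**2).clamp(min=0.0).sqrt() * eps

def noise_discrete(tokens, t):
    """Masking corruption of a discrete modality: each token is replaced
    by MASK_ID independently with probability 1 - alpha(t)."""
    drop = torch.rand(tokens.shape) > alpha(t)
    return torch.where(drop, torch.full_like(tokens, MASK_ID), tokens)

# Decoupled times: each modality is corrupted at its own noise level.
image = torch.randn(4, 3, 32, 32)
text = torch.randint(1, 100, (4, 16))
noisy_image = noise_continuous(image, torch.rand(()))
noisy_text = noise_discrete(text, torch.rand(()))
```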
Problem

Research questions and friction points this paper is trying to address.

Joint generation of multimodal data via diffusion models
Eliminating reliance on external preprocessing protocols
Enabling native generation across arbitrary state spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal diffusion models on arbitrary state spaces
Decoupled noise schedule for each modality (see the sampling sketch after this list)
Native generation of coupled data across modalities
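
A hedged sketch of how modality-conditioned sampling falls out of the decoupled schedule (hypothetical code; `denoise_step` is an illustrative Euler-style update and `model` stands in for any joint denoiser taking both states and both times): conditioning on text simply means pinning the text's time at 0 while the image is denoised from t = 1 to t = 0.

```python
import torch

def denoise_step(model, x_img, x_txt, t_img, t_txt, dt):
    """One illustrative Euler-style reverse step for the image modality;
    the model sees the joint state and a separate time per modality."""
    eps_hat = model(x_img, x_txt, t_img, t_txt)
    return x_img - dt * eps_hat

def sample_image_given_text(model, text, img_shape, steps=50):
    """Generate an image conditioned on clean text by pinning t_txt = 0."""
    x_img = torch.randn(img_shape)    # image starts from pure noise
    t_txt = torch.zeros(())           # text pinned at t = 0: clean conditioning
    ts = torch.linspace(1.0, 0.0, steps + 1)
    for i in range(steps):
        x_img = denoise_step(model, x_img, text, ts[i], t_txt, ts[i] - ts[i + 1])
    return x_img

# Smoke test with a dummy denoiser that predicts zero noise.
dummy = lambda xi, xt, ti, tt: torch.zeros_like(xi)
img = sample_image_given_text(dummy, torch.randint(1, 100, (16,)), (3, 32, 32))
```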
Kevin Rojas
School of Mathematics, Georgia Institute of Technology, Atlanta, GA; Machine Learning Center, Georgia Institute of Technology, Atlanta, GA
Yuchen Zhu
Georgia Institute of Technology
Diffusion Models · Discrete Diffusion · Vision-language Model
Sichen Zhu
Georgia Institute of Technology
Felix X.-F. Ye
Department of Mathematics & Statistics, SUNY Albany, NY
Molei Tao
Associate Professor, Georgia Institute of Technology
foundation of machine learning · applied & computational math · stochastic/nonlinear dynamics