🤖 AI Summary
Existing generative models for 4D cardiac MRI (3D space + time) often suffer from structural distortions and physiological inconsistencies due to decoupled spatiotemporal modeling. To address this, the authors propose CardioDiT, the first fully 4D latent diffusion framework based on a Diffusion Transformer, which enables end-to-end synthesis of short-axis cine cardiac MRI through unified spatiotemporal joint modeling without factorization. The method pairs a spatiotemporal VQ-VAE with a Diffusion Transformer to compactly encode 2D+t slices in a latent space and jointly generate full 3D+t volumes. Experiments show that CardioDiT significantly improves inter-slice consistency and temporal coherence on both public and private datasets, while more accurately reproducing the distribution of real cardiac functional dynamics.
📝 Abstract
Latent diffusion models (LDMs) have recently achieved strong performance in 3D medical image synthesis. However, modalities like cine cardiac MRI (CMR), representing a temporally synchronized 3D volume across the cardiac cycle, add an additional dimension that most generative approaches do not model directly. Instead, they factorize space and time or enforce temporal consistency through auxiliary mechanisms such as anatomical masks. Such strategies introduce structural biases that may limit global context integration and lead to subtle spatiotemporal discontinuities or physiologically inconsistent cardiac dynamics. We investigate whether a unified 4D generative model can learn continuous cardiac dynamics without architectural factorization. We propose CardioDiT, a fully 4D latent diffusion framework for short-axis cine CMR synthesis based on diffusion transformers. A spatiotemporal VQ-VAE encodes 2D+t slices into compact latents, which a diffusion transformer then models jointly as complete 3D+t volumes, coupling space and time throughout the generative process. We evaluate CardioDiT on public CMR datasets and a larger private cohort, comparing it to baselines with progressively stronger spatiotemporal coupling. Results show improved inter-slice consistency, temporally coherent motion, and realistic cardiac function distributions, suggesting that explicit 4D modeling with a diffusion transformer provides a principled foundation for spatiotemporal cardiac image synthesis. Code and models trained on public data are available at https://github.com/Cardio-AI/cardiodit.
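The core architectural claim is that the diffusion transformer attends over the entire 3D+t latent volume as one token sequence, rather than applying spatial and temporal attention in separate passes. A minimal numpy sketch of that idea is below; it is purely illustrative (the function names, shapes, and single-head attention are assumptions, not CardioDiT's actual implementation), but it shows how flattening slices and frames into one sequence couples space and time in a single attention map.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_spacetime_attention(latents, Wq, Wk, Wv):
    """Single-head self-attention over ALL slice-frame tokens at once.

    latents: (S, T, D) array, one D-dim latent token per (slice, frame),
    as a spatiotemporal VQ-VAE might produce for a stack of 2D+t slices.
    Flattening S and T into one sequence lets every token attend to every
    other token, so space and time are coupled in a single operation
    instead of being factorized into separate spatial/temporal passes.
    """
    S, T, D = latents.shape
    tokens = latents.reshape(S * T, D)      # one joint 3D+t token sequence
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(D))    # (S*T, S*T): full 4D coupling
    return (attn @ v).reshape(S, T, D)

# Toy example: 8 short-axis slices, 12 cardiac phases, 16-dim latents.
rng = np.random.default_rng(0)
S, T, D = 8, 12, 16
latents = rng.standard_normal((S, T, D))
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
out = joint_spacetime_attention(latents, Wq, Wk, Wv)
print(out.shape)  # (8, 12, 16)
```

By contrast, a factorized baseline would compute attention within each frame (an (S, S) map) and within each slice's time series (a (T, T) map) separately, which is exactly the structural bias the abstract argues can cause spatiotemporal discontinuities.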