AI Summary
This work proposes VolDiT, the first pure Transformer-based 3D diffusion model for medical image synthesis, addressing the limitations of conventional convolutional U-Net architectures that struggle to capture global context due to their restricted receptive fields. VolDiT processes 3D volumes directly through voxel patch embeddings and leverages global self-attention to model long-range dependencies. To enable precise structural control, the model introduces a timestep-gated adapter that maps segmentation masks into learnable control tokens, facilitating token-level conditional modulation. Experiments demonstrate that VolDiT significantly outperforms U-Net baselines in high-resolution 3D medical image generation, achieving notable advances in global consistency, synthesis fidelity, and spatial controllability.
Abstract
Diffusion models have become a leading approach for high-fidelity medical image synthesis. However, most existing methods for 3D medical image generation rely on convolutional U-Net backbones within latent diffusion frameworks. While effective, these architectures impose strong locality biases and limited receptive fields, which may constrain scalability, global context integration, and flexible conditioning. In this work, we introduce VolDiT, the first purely transformer-based 3D Diffusion Transformer for volumetric medical image synthesis. Our approach extends diffusion transformers to native 3D data through volumetric patch embeddings and global self-attention operating directly over 3D tokens. To enable structured control, we propose a timestep-gated control adapter that maps segmentation masks into learnable control tokens that modulate transformer layers during denoising. This token-level conditioning mechanism allows precise spatial guidance while preserving the modeling advantages of transformer architectures. We evaluate our model on high-resolution 3D medical image synthesis tasks and compare it to state-of-the-art 3D latent diffusion models based on U-Nets. Results demonstrate improved global coherence, superior generative fidelity, and enhanced controllability. Our findings suggest that fully transformer-based diffusion models provide a flexible foundation for volumetric medical image synthesis. The code and models trained on public data are available at https://github.com/Cardio-AI/voldit.
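To make the two core ideas concrete, the following is a minimal, dependency-free sketch of (a) volumetric patchification of a 3D volume into flattened voxel tokens and (b) a timestep-gated modulation of those tokens by control tokens derived from a segmentation mask. The patch size, gate schedule, and identity mask projection are illustrative assumptions, not the released VolDiT implementation (see the repository above for the actual model).

```python
import math
import random


def patchify_3d(vol, p):
    """Split a (D, H, W) nested-list volume into non-overlapping p*p*p voxel
    patches, each flattened (row-major) into a token of length p**3."""
    D, H, W = len(vol), len(vol[0]), len(vol[0][0])
    assert D % p == 0 and H % p == 0 and W % p == 0, "patch size must divide volume"
    tokens = []
    for z in range(0, D, p):
        for y in range(0, H, p):
            for x in range(0, W, p):
                tokens.append([vol[z + dz][y + dy][x + dx]
                               for dz in range(p) for dy in range(p) for dx in range(p)])
    return tokens


def timestep_gate(t, T):
    """Hypothetical sigmoid gate in (0, 1) over diffusion timestep t of T:
    the paper's adapter learns its gating; this fixed schedule only
    illustrates the timestep dependence."""
    return 1.0 / (1.0 + math.exp(-10.0 * (t / T - 0.5)))


# Toy 8x8x8 volume and a binary segmentation mask of the same shape.
vol = [[[random.random() for _ in range(8)] for _ in range(8)] for _ in range(8)]
mask = [[[1.0 if z < 4 else 0.0 for _ in range(8)] for _ in range(8)] for z in range(8)]

tokens = patchify_3d(vol, 2)          # 64 tokens, each of length 8
controls = patchify_3d(mask, 2)       # mask tokens stand in for learned control tokens

# Token-level conditioning: add gated control tokens to the image tokens.
g = timestep_gate(500, 1000)          # 0.5 at the schedule midpoint
modulated = [[v + g * c for v, c in zip(tok, ctl)]
             for tok, ctl in zip(tokens, controls)]
```

In the full model, the control tokens would come from a learned embedding of the mask and modulate every transformer layer; this sketch only shows the token geometry and the gating arithmetic.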