🤖 AI Summary
Existing medical image generation methods face limitations in architectural efficiency, multi-organ coverage, and scalability, which hinder their applicability to large-scale clinical settings. This work proposes MedVAR, the first autoregressive foundation model for medical image generation, which combines a multi-scale hierarchical structure with a next-scale prediction paradigm to generate structured medical image representations in a coarse-to-fine manner. Trained on a harmonized six-anatomy dataset of roughly 440,000 CT and MRI images, MedVAR achieves state-of-the-art performance across multiple metrics, improving generation fidelity, diversity, and scalability. The model establishes a new direction for foundation models in medical image synthesis and shows strong potential for broad clinical deployment.
📝 Abstract
Medical image generation is pivotal in applications such as data augmentation for low-resource clinical tasks and privacy-preserving data sharing. However, developing a scalable generative backbone for medical imaging requires architectural efficiency, sufficient multi-organ data, and principled evaluation, and current approaches leave these aspects unresolved. We therefore introduce MedVAR, the first autoregressive foundation model that adopts the next-scale prediction paradigm to enable fast, scalable medical image synthesis. MedVAR generates images in a coarse-to-fine manner and produces structured multi-scale representations suitable for downstream use. To support hierarchical generation, we curate a harmonized dataset of around 440,000 CT and MRI images spanning six anatomical regions. Comprehensive experiments across fidelity, diversity, and scalability show that MedVAR achieves state-of-the-art generative performance and offers a promising architectural direction for future medical generative foundation models.
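To make the next-scale prediction idea concrete, here is a minimal sketch of coarse-to-fine generation in the VAR style: at each scale, a model predicts a residual map conditioned on the upsampled accumulation of all coarser scales. Everything here is illustrative — the scale schedule, the nearest-neighbor upsampler, and the random stand-in for the model's prediction are assumptions, not MedVAR's actual implementation.

```python
import numpy as np

def upsample(x, size):
    """Nearest-neighbor upsample a square 2D map to (size, size)."""
    idx = np.arange(size) * x.shape[0] // size
    return x[np.ix_(idx, idx)]

def next_scale_generate(scales=(1, 2, 4, 8), seed=0):
    """Coarse-to-fine generation sketch (VAR-style next-scale prediction).

    At each scale s, the 'model' (here a random stand-in, NOT the real
    MedVAR network) predicts a residual map conditioned on the upsampled
    sum of all coarser-scale predictions.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros((scales[0], scales[0]))
    for s in scales:
        cond = upsample(acc, s)              # condition on coarser scales
        residual = rng.normal(size=(s, s))   # stand-in for model output
        acc = cond + residual                # refine at this scale
    return acc

img = next_scale_generate()
print(img.shape)  # (8, 8)
```

Because each scale conditions only on coarser maps, all tokens within a scale can be predicted in parallel, which is the source of the speed and scalability advantage the abstract attributes to next-scale prediction.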