🤖 AI Summary
To address the high computational cost of training deep learning models on high-resolution medical images, this paper introduces MedVAE, a family of six large-scale 2D and 3D variational autoencoders that encode medical images as downsized latent representations and decode those latents back to high-resolution images. The autoencoders are trained with a novel two-stage approach on 1,052,730 medical images. Across diverse tasks drawn from 20 medical image datasets, the authors show that (1) substituting MedVAE latent representations for high-resolution images when training downstream models yields efficiency gains of up to 70× in throughput while preserving clinically relevant features, and (2) latent representations can be decoded back to high-resolution images with high fidelity. The code is publicly available.
📝 Abstract
Medical images are acquired at high resolutions with large fields of view in order to capture fine-grained features necessary for clinical decision-making. Consequently, training deep learning models on medical images can incur large computational costs. In this work, we address the challenge of downsizing medical images in order to improve downstream computational efficiency while preserving clinically relevant features. We introduce MedVAE, a family of six large-scale 2D and 3D autoencoders capable of encoding medical images as downsized latent representations and decoding latent representations back to high-resolution images. We train MedVAE autoencoders using a novel two-stage training approach with 1,052,730 medical images. Across diverse tasks obtained from 20 medical image datasets, we demonstrate that (1) utilizing MedVAE latent representations in place of high-resolution images when training downstream models can lead to efficiency benefits (up to 70x improvement in throughput) while simultaneously preserving clinically relevant features and (2) MedVAE can decode latent representations back to high-resolution images with high fidelity. Our work demonstrates that large-scale, generalizable autoencoders can help address critical efficiency challenges in the medical domain. Our code is available at https://github.com/StanfordMIMI/MedVAE.
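The encode-then-train-downstream workflow the abstract describes can be sketched as follows. This is not MedVAE's learned architecture: the average-pool "encoder" and nearest-neighbour "decoder" below are placeholders that only illustrate the shape contract of a downsized latent and the pixel-count reduction that drives the downstream efficiency gains.

```python
import numpy as np

def encode(image: np.ndarray, factor: int = 8) -> np.ndarray:
    """Stand-in 'encoder': average-pool each spatial axis by `factor`.
    (MedVAE uses a learned VAE encoder; this only mimics the shapes.)"""
    h, w = image.shape
    assert h % factor == 0 and w % factor == 0, "dims must divide evenly"
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(latent: np.ndarray, factor: int = 8) -> np.ndarray:
    """Stand-in 'decoder': nearest-neighbour upsample back to full resolution."""
    return np.repeat(np.repeat(latent, factor, axis=0), factor, axis=1)

image = np.random.rand(512, 512).astype(np.float32)  # one high-res 2D image
latent = encode(image)    # (64, 64): the downsized representation a downstream
                          # model would train on instead of the full image
recon = decode(latent)    # (512, 512): recovered at the original resolution
print(latent.shape, image.size / latent.size)  # (64, 64) 64.0
```

With an 8× reduction per spatial axis, a 2D latent carries 64× fewer values than the original image, which is the kind of shrinkage that lets downstream training run at much higher throughput.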