🤖 AI Summary
Current medical AI models predominantly rely on task-specific conditional distribution modeling, limiting their ability to support flexible cross-modal and cross-task reasoning. To address this, we propose MetaVoxel—the first unified diffusion-based generative framework jointly modeling medical imaging (T1-weighted MRI) and structured clinical metadata (e.g., age, sex). MetaVoxel learns the multimodal joint distribution via a single end-to-end diffusion process, incorporating cross-modal embedding alignment and joint noise prediction to enable zero-shot inference from arbitrary input subsets. Evaluated on over 10,000 MRI scans from nine heterogeneous sources, a single MetaVoxel model simultaneously achieves image synthesis, continuous age estimation, and binary sex classification—matching or surpassing dedicated task-specific baselines. This breaks away from conventional conditional modeling paradigms, markedly enhancing model generalizability and deployment efficiency.
📝 Abstract
Modern deep learning methods have achieved impressive results across tasks ranging from disease classification and continuous biomarker estimation to realistic medical image generation. Most of these approaches are trained to model conditional distributions defined by a specific predictive direction with a specific set of input variables. We introduce MetaVoxel, a generative joint diffusion modeling framework that models the joint distribution over imaging data and clinical metadata by learning a single diffusion process spanning all variables. By capturing the joint distribution, MetaVoxel unifies tasks that traditionally require separate conditional models and supports flexible zero-shot inference using arbitrary subsets of inputs without task-specific retraining. Using more than 10,000 T1-weighted MRI scans paired with clinical metadata from nine datasets, we show that a single MetaVoxel model can perform image generation, age estimation, and sex prediction, achieving performance comparable to established task-specific baselines. Additional experiments highlight its capabilities for flexible inference. Together, these findings demonstrate that joint multimodal diffusion offers a promising direction for unifying medical AI models and enabling broader clinical applicability.
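The zero-shot inference over arbitrary input subsets described above can be sketched with an inpainting-style reverse diffusion, where a single noise process spans all variables (image latent plus metadata) and observed entries are clamped to their forward-noised values at each step while unobserved entries are denoised. Everything below is an illustrative toy, not MetaVoxel's actual network or schedule: the denoiser, the 6-dimensional joint vector, and the beta schedule are all assumptions.

```python
import numpy as np

# Toy sketch: joint diffusion over [4 image dims, age, sex], with
# zero-shot conditioning by clamping observed variables each step.
rng = np.random.default_rng(0)
D = 6                      # hypothetical joint vector size
T = 50                     # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.2, T)
alphas_bar = np.cumprod(1.0 - betas)

def toy_denoiser(x_t, t):
    # Hypothetical noise predictor; a real model would be a trained
    # neural network jointly predicting noise for all variables.
    return x_t * np.sqrt(1.0 - alphas_bar[t])

def sample(observed, mask, rng):
    """Reverse diffusion; mask==True marks observed variables."""
    x = rng.standard_normal(D)
    for t in range(T - 1, -1, -1):
        # Clamp observed entries to their forward-noised values
        # at the current noise level (inpainting-style trick).
        noised_obs = (np.sqrt(alphas_bar[t]) * observed
                      + np.sqrt(1 - alphas_bar[t]) * rng.standard_normal(D))
        x = np.where(mask, noised_obs, x)
        eps = toy_denoiser(x, t)
        x = (x - betas[t] / np.sqrt(1 - alphas_bar[t]) * eps) \
            / np.sqrt(1 - betas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(D) * (~mask)
    # Restore observed values exactly; unobserved entries are sampled.
    return np.where(mask, observed, x)

# "Age/sex estimation": image dims observed, metadata sampled.
obs = np.zeros(D)
obs[:4] = rng.standard_normal(4)
mask = np.array([True] * 4 + [False] * 2)
out = sample(obs, mask, rng)
```

The same loop covers all three tasks reported in the abstract by changing only the mask: an all-False mask yields unconditional image synthesis, masking the image dims yields metadata-conditioned generation, and masking the metadata (as here) yields age and sex estimation from an image.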