No Modality Left Behind: Dynamic Model Generation for Incomplete Medical Data

📅 2025-09-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Clinical multimodal medical imaging frequently suffers from missing modalities, and existing approaches—relying on data discarding, imputation, or fixed architectures—exhibit limited generalizability and robustness. To address this, we propose a hypernetwork-based dynamic model generation method: a conditional hypernetwork generates task-specific model weights in real time, tailored to the currently available modality combination, enabling end-to-end adaptive inference for arbitrary missing-modality configurations within a single unified model. Our approach eliminates the need for data imputation or modality discarding and is the first to achieve full combinatorial generalization across all possible modality subsets within a single framework. Under 25% data completeness, our method achieves an 8% accuracy improvement over the best baseline, significantly outperforming models trained on complete data, channel-dropping strategies, and imputation-based methods—demonstrating its effectiveness and practicality in real-world clinical settings.

Technology Category

Application Category

📝 Abstract
In real world clinical environments, training and applying deep learning models on multi-modal medical imaging data often struggles with partially incomplete data. Standard approaches either discard missing samples, require imputation or repurpose dropout learning schemes, limiting robustness and generalizability. To address this, we propose a hypernetwork-based method that dynamically generates task-specific classification models conditioned on the set of available modalities. Instead of training a fixed model, a hypernetwork learns to predict the parameters of a task model adapted to available modalities, enabling training and inference on all samples, regardless of completeness. We compare this approach with (1) models trained only on complete data, (2) state of the art channel dropout methods, and (3) an imputation-based method, using artificially incomplete datasets to systematically analyze robustness to missing modalities. Results demonstrate superior adaptability of our method, outperforming state of the art approaches with an absolute increase in accuracy of up to 8% when trained on a dataset with 25% completeness (75% of training data with missing modalities). By enabling a single model to generalize across all modality configurations, our approach provides an efficient solution for real-world multi-modal medical data analysis.
Problem

Research questions and friction points this paper is trying to address.

Handles incomplete multi-modal medical imaging data
Dynamically generates models for available modalities
Improves robustness and accuracy with missing data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hypernetwork generates task-specific models dynamically
Adapts to available medical imaging modalities
Outperforms state-of-the-art missing data methods
🔎 Similar Papers
No similar papers found.
C
Christoph Fürböck
Computational Imaging Research Lab, Department for Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria; Comprehensive Center for Artificial Intelligence in Medicine, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Machine Learning Driven Precision Imaging, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
P
Paul Weiser
Computational Imaging Research Lab, Department for Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts USA; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts USA
B
Branko Mitic
Computational Imaging Research Lab, Department for Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria; Comprehensive Center for Artificial Intelligence in Medicine, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Machine Learning Driven Precision Imaging, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
P
Philipp Seeböck
Computational Imaging Research Lab, Department for Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria; Comprehensive Center for Artificial Intelligence in Medicine, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Machine Learning Driven Precision Imaging, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
T
Thomas Helbich
Division of General and Pediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
Georg Langs
Georg Langs
Medical University of Vienna, CIR Lab
Machine Learning in NeuroImagingFunctional Connectivity