MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited performance and high deployment costs of models trained on small multimodal medical datasets, this paper proposes a knowledge distillation–based lightweight multimodal learning framework. The method employs heterogeneous unimodal teacher models to collaboratively guide a compact multimodal student network via a novel modality-aware distillation mechanism. It further introduces a multi-head joint fusion architecture that enables end-to-end unified modeling under missing modalities—without requiring imputation—and integrates cross-modal representation alignment with a multi-task distillation loss. Evaluated across five clinical and non-clinical multimodal tasks, the framework consistently outperforms state-of-the-art methods. It significantly enhances the performance of lightweight models in binary classification, multi-label prediction, and cross-domain generalization, demonstrating superior efficiency and robustness in resource-constrained medical AI scenarios.

📝 Abstract
Multimodal fusion leverages information across modalities to learn better feature representations with the goal of improving performance in fusion-based tasks. However, multimodal datasets, especially in medical settings, are typically smaller than their unimodal counterparts, which can impede the performance of multimodal models. Additionally, the increase in the number of modalities is often associated with an overall increase in the size of the multimodal network, which may be undesirable in medical use cases. Utilizing smaller unimodal encoders may lead to sub-optimal performance, particularly when dealing with high-dimensional clinical data. In this paper, we propose the Modality-INformed knowledge Distillation (MIND) framework, a multimodal model compression approach based on knowledge distillation that transfers knowledge from ensembles of pre-trained deep neural networks of varying sizes into a smaller multimodal student. The teacher models consist of unimodal networks, allowing the student to learn from diverse representations. MIND employs multi-head joint fusion models, as opposed to single-head models, enabling the use of unimodal encoders in the case of unimodal samples without requiring imputation or masking of absent modalities. As a result, MIND generates an optimized multimodal model, enhancing both multimodal and unimodal representations. It can also be leveraged to balance multimodal learning during training. We evaluate MIND on binary and multilabel clinical prediction tasks using time series data and chest X-ray images. Additionally, we assess the generalizability of the MIND framework on three non-medical multimodal multiclass datasets. Experimental results demonstrate that MIND enhances the performance of the smaller multimodal network across all five tasks, as well as various fusion methods and multimodal architectures, compared to state-of-the-art baselines.
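The abstract describes transferring knowledge from ensembles of pre-trained unimodal teachers into a smaller multimodal student. As a rough illustration only, the sketch below shows a generic softened-softmax distillation loss in the style of Hinton et al., averaged over multiple unimodal teachers. The function name `mind_style_distillation_loss` and the `alpha`/`temperature` weighting are hypothetical; the actual MIND objective also includes cross-modal representation alignment and multi-head fusion terms not shown here.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def mind_style_distillation_loss(student_logits, teacher_logits_list,
                                 true_label, temperature=2.0, alpha=0.5):
    """Hypothetical sketch: task cross-entropy on the multimodal student
    plus the average KL to each unimodal teacher's softened output."""
    student_soft = softmax(student_logits, temperature)
    # Distillation term: average over the ensemble of unimodal teachers.
    distill = sum(
        kl_divergence(softmax(t, temperature), student_soft)
        for t in teacher_logits_list
    ) / len(teacher_logits_list)
    # Task term: cross-entropy against the hard label.
    student_probs = softmax(student_logits)
    task = -math.log(student_probs[true_label] + 1e-12)
    # T^2 rescaling is the standard convention from Hinton et al. (2015).
    return alpha * task + (1 - alpha) * (temperature ** 2) * distill
```

In this sketch each teacher sees only its own modality (e.g. time series or chest X-ray), while the student's logits come from the fused multimodal representation, so the averaged KL term is what lets the compact student absorb diverse unimodal knowledge.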
Problem

Research questions and friction points this paper is trying to address.

Multimodal Data
Model Size Limitation
Complex Medical Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

MIND
Multi-modal Learning
Performance Superiority