🤖 AI Summary
Clinical deployment of deep learning models often encounters missing modalities (e.g., free-text or structured reports) at inference time. To address this, the authors propose Multimodal Privileged Knowledge Distillation (MMPKD), a framework that leverages textual and structured data—available to teacher models only at training time—to guide a unimodal Vision Transformer student for lesion localization in chest X-ray and mammography diagnosis. Experiments show that MMPKD improves the zero-shot ability of the student's attention maps to localize regions of interest, but this gain does not transfer across domains, contrary to what prior research suggested about its generalizability. This strong context dependence is a critical consideration for real-world clinical adoption.
📝 Abstract
Deploying deep learning models in clinical practice often requires leveraging multiple data modalities, such as images, text, and structured data, to achieve robust and trustworthy decisions. However, not all modalities are always available at inference time. In this work, we propose multimodal privileged knowledge distillation (MMPKD), a training strategy that uses additional modalities available only during training to guide a unimodal vision model. Specifically, we use a text-based teacher model for chest radiographs (MIMIC-CXR) and a tabular-metadata-based teacher model for mammography (CBIS-DDSM) to distill knowledge into a Vision Transformer student. We show that MMPKD improves the attention maps' zero-shot ability to localize regions of interest (ROIs) in input images, although, contrary to what prior research suggests, this effect does not generalize across domains.
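The training strategy described above can be illustrated with a minimal sketch: a supervised task loss on the student's predictions plus a privileged distillation term that pulls the student's attention map toward the teacher's. The function name `mmpkd_loss`, the mean-squared-error distance on normalized attention maps, and the weighting `alpha` are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mmpkd_loss(student_logits, labels, student_attn, teacher_attn, alpha=0.5):
    """Hypothetical MMPKD objective: task loss + attention distillation.

    teacher_attn comes from the privileged (text/tabular) teacher and is
    only available at training time; at inference, the vision student
    runs alone.
    """
    # Supervised task loss: cross-entropy on the student's class logits.
    probs = softmax(student_logits)
    task = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    # Distillation loss (assumed form): MSE between attention maps,
    # each normalized to sum to 1 over spatial positions.
    s = student_attn / (student_attn.sum(axis=-1, keepdims=True) + 1e-12)
    t = teacher_attn / (teacher_attn.sum(axis=-1, keepdims=True) + 1e-12)
    distill = np.mean((s - t) ** 2)
    return task + alpha * distill
```

When the student's attention already matches the teacher's, the distillation term vanishes and only the task loss remains; mismatched attention maps are penalized in proportion to their squared difference.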