🤖 AI Summary
Medical multimodal AI is hindered by the scarcity of high-quality heterogeneous data, particularly in dermatology, where image datasets lack rich clinical textual annotations—limiting model robustness and generalization. To address this, we propose a fine-tuning-free prompt engineering framework that leverages structured medical metadata (e.g., lesion location, age, sex) to guide large language models in generating high-fidelity, low-hallucination synthetic clinical notes. This approach is the first to enable image-to-text cross-modal retrieval solely through prompt design. Evaluated across multiple dermatological benchmarks, the synthetic notes significantly improve multimodal classification accuracy (+3.2–7.8%), with even greater gains under domain shift. Our core innovation lies in embedding clinical priors directly into the prompting mechanism—ensuring both clinical plausibility and modeling efficacy—thereby bridging the modality gap without architectural modification or parameter updates.
📝 Abstract
Multimodal (MM) learning is emerging as a promising paradigm in biomedical artificial intelligence (AI) applications, integrating complementary modalities that highlight different aspects of patient health. The scarcity of large, heterogeneous biomedical MM data has constrained the development of robust models for medical AI applications. In the dermatology domain, for instance, skin lesion datasets typically include only images linked to minimal metadata describing the condition, limiting the benefits of MM data integration for reliable and generalizable predictions. Recent advances in Large Language Models (LLMs) enable the synthesis of textual descriptions of image findings, potentially allowing image and text representations to be combined. However, LLMs are not specifically trained for the medical domain, and their naive inclusion has raised concerns about the risk of hallucinations in clinically relevant contexts. This work investigates strategies for generating synthetic textual clinical notes, in terms of prompt design and medical metadata inclusion, and evaluates their impact on MM architectures toward enhancing performance in classification and cross-modal retrieval tasks. Experiments across several heterogeneous dermatology datasets demonstrate that synthetic clinical notes not only enhance classification performance, particularly under domain shift, but also unlock cross-modal retrieval capabilities, a downstream task that is not explicitly optimized during training.
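The metadata-grounded prompting strategy described above can be sketched as follows. This is an illustrative assumption, not the paper's actual template: the field names (`lesion_location`, `age`, `sex`, `diagnosis`) and the prompt wording are hypothetical, showing only the general idea of embedding structured clinical priors in the prompt to constrain generation and reduce hallucination.

```python
# Hypothetical sketch of metadata-grounded prompt construction for
# synthetic clinical-note generation; field names and wording are
# assumptions, not the paper's actual prompt templates.

def build_note_prompt(metadata: dict) -> str:
    """Embed structured dermatology metadata as clinical priors in the prompt."""
    context = "; ".join(
        f"{key.replace('_', ' ')}: {value}" for key, value in metadata.items()
    )
    return (
        "You are a dermatologist writing a brief clinical note.\n"
        f"Patient findings: {context}.\n"
        "Describe only what is supported by these findings; "
        "do not speculate beyond the given metadata."
    )

prompt = build_note_prompt(
    {"lesion_location": "upper back", "age": 54, "sex": "male",
     "diagnosis": "melanocytic nevus"}
)
print(prompt)
```

The resulting prompt string would then be sent to an LLM of choice; because the instruction explicitly restricts the model to the supplied metadata, the generated note stays anchored to known patient facts rather than free-form image speculation.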