🤖 AI Summary
This work addresses fetal organ identification in ultrasound imaging through a multimodal learning framework integrating ultrasound images and clinical text. Methodologically, it first initializes and fine-tunes a vision backbone on ultrasound images using batch-wise augmentation; it then jointly trains a classification head on aligned image–text features via a dynamically augmented data loader and a medical-imaging-specialized initialization scheme. The key contributions are: (i) the first introduction of a synergistic training paradigm combining unimodal pre-finetuning, batch-wise augmentation, and multimodal fusion; and (ii) a multimodal large language model (LLM) pipeline adapted to the ultrasound domain. Evaluated on the FPU23 and UPMC Food-101 benchmarks, the method gives the best results among the compared approaches, reaching near-state-of-the-art performance on UPMC Food-101. All code and baseline implementations are publicly released.
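The fusion step described above, in which image features and text features are combined before training the classification head, can be sketched as follows. This is a minimal illustration with stand-in feature extractors; the function names and the toy features are assumptions for exposition, not the paper's actual vision backbone or text encoder.

```python
def image_features(img):
    # Stand-in for the fine-tuned vision backbone's penultimate-layer output.
    # Here a 2D list-of-lists "image" is reduced to two toy statistics.
    flat = [p for row in img for p in row]
    return [sum(flat) / len(flat), max(flat)]

def text_features(text):
    # Stand-in for a text encoder (e.g., pooled token embeddings from an LLM).
    return [len(text), sum(map(ord, text)) % 97]

def fused_features(img, text):
    # Concatenation fusion: the head layer is trained on this joint vector.
    return image_features(img) + text_features(text)
```

In practice the two feature vectors would come from the pre-finetuned image model and the text model, and the concatenated vector would feed a trainable head layer.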
📝 Abstract
This paper proposes batch augmentation with unimodal fine-tuning to detect fetal organs from ultrasound images and their associated clinical text. We also prescribe pre-training the initial layers on the investigated medical data before multimodal training. First, we apply a transferred initialization to the unimodal image portion of the dataset with batch augmentation; this step adapts the initial layer weights to medical data. We then apply neural networks (NNs) with the fine-tuned initial layers to images in batches, again with batch augmentation, to obtain image features. In parallel, we extract information from the textual descriptions of the images and combine it with the image features to train the head layer. We write a dataloader script that loads the multimodal data and applies existing unimodal image augmentation techniques, with batch augmentation, to the multimodal data; the dataloader draws a new random augmentation for each batch to improve generalization. We investigate the FPU23 ultrasound and UPMC Food-101 multimodal datasets. The multimodal large language model (LLM) trained with the proposed procedure provides the best results among the investigated methods, and we achieve near state-of-the-art (SOTA) performance on the UPMC Food-101 dataset. We share the scripts of the proposed method and its traditional counterparts at the following repository: github.com/dipuk0506/multimodal
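The per-batch augmentation idea in the abstract, where the dataloader picks a fresh random augmentation for each batch of image-text pairs, can be sketched as below. This is a stdlib-only illustration, not the authors' actual dataloader script: the augmentation functions and the `batches` generator are hypothetical stand-ins for real image transforms and a framework dataloader.

```python
import random

def identity(img):
    # no-op augmentation
    return img

def hflip(img):
    # horizontal flip of a 2D list-of-lists "image"
    return [row[::-1] for row in img]

def vflip(img):
    # vertical flip
    return img[::-1]

# pool of unimodal image augmentations reused for the multimodal data
AUGMENTATIONS = [identity, hflip, vflip]

def batches(pairs, batch_size, rng=random):
    """Yield (images, texts) batches from (image, text) pairs.

    A new augmentation is drawn at random for EACH batch and applied to
    every image in that batch; the paired text is passed through unchanged.
    """
    for start in range(0, len(pairs), batch_size):
        chunk = pairs[start:start + batch_size]
        aug = rng.choice(AUGMENTATIONS)  # fresh random augmentation per batch
        images = [aug(img) for img, _ in chunk]
        texts = [txt for _, txt in chunk]
        yield images, texts
```

Because the augmentation changes from batch to batch, successive epochs see differently transformed versions of the same images, which is the generalization effect the abstract attributes to the dataloader.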