Batch Augmentation with Unimodal Fine-tuning for Multimodal Learning

📅 2025-05-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses fetal organ identification in ultrasound imaging through a multimodal learning framework that integrates ultrasound images with clinical text. Methodologically, it first initializes and fine-tunes a vision backbone on ultrasound images using batch-wise augmentation; it then trains a classification head on aligned image–text features via a dataloader that draws a new random augmentation for each batch, together with a medical-imaging-specialized initialization scheme. The key contributions are: (i) the first synergistic training paradigm combining unimodal pre-finetuning, batch-wise augmentation, and multimodal fusion; and (ii) a multimodal large language model (LLM) adapted to the ultrasound domain. Evaluated on the FPU23 and UPMC Food-101 benchmarks, the method achieves near-state-of-the-art performance. All code and baseline implementations are publicly released.

πŸ“ Abstract
This paper proposes batch augmentation with unimodal fine-tuning to detect fetal organs from ultrasound images and their associated clinical text. We also propose pre-training the initial layers on the investigated medical data before multimodal training. First, we apply transferred initialization with the unimodal image portion of the dataset using batch augmentation; this step adapts the initial layer weights to medical data. Then, we apply neural networks (NNs) with the fine-tuned initial layers to batches of images, again with batch augmentation, to obtain image features. We also extract features from the textual descriptions of the images and combine them with the image features to train the head layer. We write a dataloader script to load the multimodal data and apply existing unimodal image augmentation techniques with batch augmentation to the multimodal data. The dataloader draws a new random augmentation for each batch to improve generalization. We investigate the FPU23 ultrasound and UPMC Food-101 multimodal datasets. The multimodal large language model (LLM) with the proposed training provides the best results among the investigated methods, and we achieve near-state-of-the-art (SOTA) performance on the UPMC Food-101 dataset. We share scripts for the proposed method and its traditional counterparts at the following repository: github.com/dipuk0506/multimodal
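The "new random augmentation for each batch" idea from the abstract can be sketched in plain Python. This is a minimal illustration under stated assumptions: the toy transforms operate on images represented as lists of pixel rows, and all function names here are hypothetical; a real pipeline would apply torchvision-style transforms to tensors inside a dataloader.

```python
import random

# Hypothetical toy augmentations on images represented as lists of pixel rows.
# A real implementation would use torchvision transforms on image tensors.
def horizontal_flip(img):
    return [row[::-1] for row in img]

def identity(img):
    return [row[:] for row in img]

def invert(img):
    return [[255 - px for px in row] for row in img]

AUGMENTATIONS = [horizontal_flip, identity, invert]

def batches_with_batch_augmentation(dataset, batch_size, seed=0):
    """Yield batches of (image, text) pairs where ONE randomly chosen
    augmentation is applied to every image in the batch; the choice is
    redrawn for each new batch (batch-wise augmentation)."""
    rng = random.Random(seed)
    for start in range(0, len(dataset), batch_size):
        batch = dataset[start:start + batch_size]
        aug = rng.choice(AUGMENTATIONS)  # new random augmentation per batch
        yield [(aug(img), text) for img, text in batch]
```

Because the augmentation is drawn once per batch, every sample within a batch receives the same transform, while successive batches see different random transforms, which is where the claimed generalization benefit comes from.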
Problem

Research questions and friction points this paper is trying to address.

Detect fetal organs from ultrasound images and clinical text
Improve multimodal learning with batch augmentation and fine-tuning
Enhance generalization using random augmentation per batch
Innovation

Methods, ideas, or system contributions that make the work stand out.

Batch augmentation with unimodal fine-tuning
Pre-training initial layers with medical data
Combining image and text features for training
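The "combining image and text features" step above can be illustrated with a minimal late-fusion sketch in plain Python. All names here are hypothetical assumptions for illustration; the paper's repository uses NN backbones for feature extraction and an nn.Linear-style trainable head.

```python
import random

def fuse_features(img_feat, txt_feat):
    """Late fusion by concatenation: features from the (frozen, fine-tuned)
    image branch joined with text features before the trainable head."""
    return img_feat + txt_feat  # list concatenation

class LinearHead:
    """Minimal stand-in for a trainable linear classification head
    over the fused image-text feature vector."""
    def __init__(self, in_dim, n_classes, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.uniform(-0.1, 0.1) for _ in range(in_dim)]
                  for _ in range(n_classes)]
        self.b = [0.0] * n_classes

    def forward(self, x):
        # one logit per class: w . x + b
        return [sum(wi * xi for wi, xi in zip(row, x)) + bi
                for row, bi in zip(self.w, self.b)]
```

Only the head is trained on the fused features; the backbone weights, already adapted to medical data in the unimodal fine-tuning stage, supply the image features.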