🤖 AI Summary
This work addresses the lack of a unified benchmark for medical multimodal federated learning (MMFL) that supports multimodal, multitask, and realistically heterogeneous settings, a gap that has hindered systematic research in this area. To bridge it, we introduce Med-MMFL, the first comprehensive benchmark tailored to the medical domain, spanning datasets with 2 to 4 modalities each (10 unique modalities in total, including histopathology images, X-rays, ECG, MRI, and clinical text) and multiple federated configurations (naturally partitioned, synthetic IID, and synthetic non-IID). We evaluate six state-of-the-art federated algorithms on tasks including segmentation, classification, modality alignment, and visual question answering, covering a range of aggregation strategies, loss functions, and regularization techniques. The entire pipeline, including data preprocessing, partitioning, and training, is open-sourced to provide a reproducible and standardized evaluation platform for MMFL research.
📝 Abstract
Federated learning (FL) enables collaborative model training across decentralized medical institutions while preserving data privacy. However, medical FL benchmarks remain scarce, with existing efforts focusing mainly on unimodal or bimodal data and a limited range of medical tasks. This gap underscores the need for standardized evaluation to advance systematic understanding in medical multimodal FL (MMFL). To this end, we introduce Med-MMFL, the first comprehensive MMFL benchmark for the medical domain, encompassing diverse modalities, tasks, and federation scenarios. Our benchmark evaluates six representative state-of-the-art FL algorithms, covering different aggregation strategies, loss formulations, and regularization techniques. It spans datasets with 2 to 4 modalities, comprising a total of 10 unique medical modalities, including text, pathology images, ECG, X-ray, radiology reports, and multiple MRI sequences. Experiments are conducted across naturally federated, synthetic IID, and synthetic non-IID settings to simulate real-world heterogeneity. We assess segmentation, classification, modality alignment (retrieval), and VQA tasks. To support reproducibility and fair comparison of future MMFL methods under realistic medical settings, we release the complete benchmark implementation, including data processing and partitioning pipelines, at https://github.com/bhattarailab/Med-MMFL-Benchmark.
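The abstract does not specify how the synthetic non-IID splits are generated; a common way to simulate label-skew heterogeneity across clients in FL benchmarks is Dirichlet partitioning, where a smaller concentration parameter yields more skewed clients. The sketch below is illustrative only (the function name, `alpha` value, and client count are assumptions, not the benchmark's actual pipeline):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with label proportions drawn
    from a Dirichlet(alpha) distribution per class.
    Small alpha -> strongly non-IID clients; large alpha -> near-IID."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        # Shuffle this class's sample indices, then carve them into
        # client shares proportional to a Dirichlet draw.
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(cls_idx)).astype(int)
        for client, chunk in enumerate(np.split(cls_idx, cuts)):
            client_indices[client].extend(chunk.tolist())
    return [np.array(ix) for ix in client_indices]

# Example: 1000 samples with 5 classes distributed to 4 clients.
labels = np.random.default_rng(1).integers(0, 5, size=1000)
splits = dirichlet_partition(labels, num_clients=4, alpha=0.5)
```

Setting `alpha` near 100 approximates the synthetic IID case, while values such as 0.1 to 0.5 are typical choices for simulating institutional heterogeneity.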