🤖 AI Summary
This work addresses the challenge of simultaneously handling missing modalities and personalization demands in federated multi-modal medical image segmentation. To this end, the authors propose FedMEPD, a novel framework that assigns a dedicated encoder to each modality to accommodate heterogeneous modality availability across clients, and introduces a partially personalized fusion decoder. This decoder leverages global multi-modal representation anchors and cross-attention mechanisms to effectively compensate for missing modality information. As the first approach to jointly tackle modality heterogeneity and personalization under the federated learning paradigm, FedMEPD demonstrates significant performance gains over existing methods on the BraTS 2018 and 2020 datasets, validating its effectiveness and superiority in personalized federated multi-modal learning.
📝 Abstract
Most existing federated learning (FL) methods for medical image analysis have considered only intramodal heterogeneity, limiting their applicability to multimodal imaging applications. In practice, some FL participants may possess only a subset of the complete imaging modalities, posing intermodal heterogeneity as a challenge to effectively training a global model on all participants' data. Meanwhile, each participant expects a personalized model tailored to its local data characteristics in FL. This work proposes a new FL framework with federated modality-specific encoders and partially personalized multimodal fusion decoders (FedMEPD) to address these two concurrent issues. Specifically, FedMEPD employs an exclusive encoder for each modality to account for the intermodal heterogeneity. While these encoders are fully federated, the decoders are partially personalized to meet individual needs, using the discrepancy between global and local parameter updates to dynamically determine which decoder filters are personalized. Implementation-wise, a server with full-modal data employs a fusion decoder to fuse representations from all modality-specific encoders, thus bridging the modalities to optimize the encoders via backpropagation. Moreover, multiple anchors are extracted from the fused multimodal representations and distributed to the clients in addition to the model parameters. Conversely, the clients with incomplete modalities calibrate their missing-modal representations toward the global full-modal anchors via scaled dot-product cross-attention, compensating for the information loss due to absent modalities. FedMEPD is validated on the BraTS 2018 and 2020 multimodal brain tumor segmentation benchmarks. Results show that it outperforms various up-to-date methods for multimodal and personalized FL, and ablations confirm that its novel designs are effective.
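The anchor-based calibration described above can be sketched as plain scaled dot-product cross-attention, with the client's missing-modal tokens as queries and the server-distributed full-modal anchors as keys and values. This is a minimal NumPy illustration under assumed shapes and a residual-addition output; the paper's exact projection layers and fusion details are not reproduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def calibrate_missing_modal(local_tokens, anchors):
    """Pull missing-modal representations toward global full-modal anchors.

    local_tokens: (N, d) client representations from available modalities.
    anchors:      (K, d) multimodal anchors broadcast by the server.
    Queries = local tokens, keys/values = anchors; the attended anchor
    summary is added back as a residual correction (an assumption here,
    not necessarily the paper's exact combination rule).
    """
    d = local_tokens.shape[-1]
    scores = local_tokens @ anchors.T / np.sqrt(d)   # (N, K) similarities
    attn = softmax(scores, axis=-1)                  # attention over anchors
    return local_tokens + attn @ anchors             # calibrated tokens, (N, d)
```

In practice the queries, keys, and values would pass through learned projections before the dot product; the sketch keeps them identity mappings for clarity.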
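The partial personalization of the fusion decoders hinges on ranking decoder filters by the discrepancy between global and local parameter updates. The following is a hypothetical sketch of that selection step: the discrepancy metric (per-filter L2 norm) and the personalization ratio are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def select_personalized_filters(global_update, local_update, ratio=0.25):
    """Choose which decoder filters stay personalized on a client.

    global_update, local_update: arrays of shape (F, ...) holding the
    per-filter parameter updates from the global round and the local
    round. Filters whose updates disagree most (largest L2 discrepancy)
    keep their local weights; the rest are overwritten by the global model.
    Returns a boolean mask of length F (True = personalized).
    """
    F = global_update.shape[0]
    diff = (global_update - local_update).reshape(F, -1)
    disc = np.linalg.norm(diff, axis=1)       # per-filter discrepancy
    k = max(1, int(ratio * F))                # how many filters to personalize
    top = np.argsort(-disc)[:k]               # largest-discrepancy filters
    mask = np.zeros(F, dtype=bool)
    mask[top] = True
    return mask
```

A fixed `ratio` is the simplest policy; the paper's dynamic determination could instead derive the threshold from the discrepancy distribution each round.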