AI Summary
In medical image segmentation, multimodal models suffer from poor generalization due to semantic gaps and feature dispersion between abstract textual prompts and fine-grained visual features. To address this, we propose EM-CLIP, a cross-modal alignment framework integrating Expectation-Maximization (EM) clustering with text-guided decoding. Its core innovations are: (1) dynamic EM clustering that compactly aggregates visual features into transferable, domain-invariant semantic centroids; and (2) a text-guided pixel-level decoder that leverages linguistic priors to modulate visual attention, explicitly bridging the modality-level semantic gap. Evaluated on multiple multi-center cardiac and fundus datasets, EM-CLIP consistently outperforms state-of-the-art methods, demonstrating superior robustness and generalization in cross-domain segmentation tasks.
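The dynamic EM clustering step above can be illustrated with a minimal NumPy sketch. This is a generic soft-assignment EM aggregation (an EMANet-style E-step/M-step loop), not the paper's actual implementation; the function name, centroid count `k`, and iteration count are illustrative assumptions.

```python
import numpy as np

def em_aggregate(features, k=8, iters=3, seed=0):
    """Aggregate pixel features (N, C) into k compact semantic centroids.

    Illustrative sketch (not the paper's code): the E-step soft-assigns
    features to centroids via a softmax over feature-centroid similarity;
    the M-step recomputes centroids as responsibility-weighted means.
    """
    rng = np.random.default_rng(seed)
    n, c = features.shape
    mu = features[rng.choice(n, size=k, replace=False)]  # init centroids from data
    for _ in range(iters):
        # E-step: soft responsibilities z[i, j] of pixel i for centroid j
        logits = features @ mu.T                      # (N, k) similarities
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        z = np.exp(logits)
        z /= z.sum(axis=1, keepdims=True)
        # M-step: centroids become responsibility-weighted feature means
        mu = (z.T @ features) / (z.sum(axis=0)[:, None] + 1e-6)
    return mu, z  # compact centroids and soft pixel-to-centroid assignments
```

In this view, the `k` centroids play the role of the compact, transferable semantic centers that downstream cross-modal matching operates on, in place of the full dispersed pixel-feature set.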
Abstract
Multimodal models have achieved remarkable success in natural image segmentation, yet they often underperform when applied to the medical domain. Through extensive study, we attribute this performance gap to the challenges of multimodal fusion, primarily the significant semantic gap between abstract textual prompts and fine-grained medical visual features, as well as the resulting feature dispersion. To address these issues, we revisit the problem from the perspective of semantic aggregation. Specifically, we propose an Expectation-Maximization (EM) Aggregation mechanism and a Text-Guided Pixel Decoder. The former mitigates feature dispersion by dynamically clustering features into compact semantic centers to enhance cross-modal correspondence. The latter is designed to bridge the semantic gap by leveraging domain-invariant textual knowledge to effectively guide deep visual representations. The synergy between these two mechanisms significantly improves the model's generalization ability. Extensive experiments on public cardiac and fundus datasets demonstrate that our method consistently outperforms existing SOTA approaches across multiple domain generalization benchmarks.
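The text-guided decoding idea, linguistic priors steering pixel-level visual attention, can be sketched as a simple similarity-based modulation. This is a hypothetical stand-in for the paper's Text-Guided Pixel Decoder (whose exact layers are not specified here); the function name and the `1 + attention` re-weighting scheme are assumptions for illustration.

```python
import numpy as np

def text_guided_modulation(visual, text):
    """Re-weight pixel features (N, C) using a text embedding (C,).

    Illustrative sketch: compute scaled similarity between each pixel
    feature and the text embedding, softmax over pixels to get an
    attention map, then amplify text-relevant pixel features.
    """
    scores = visual @ text / np.sqrt(visual.shape[1])  # (N,) similarity
    scores -= scores.max()                             # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum()                                 # softmax over pixels
    return visual * (1.0 + attn[:, None])              # text-guided re-weighting
```

Because the text embedding is shared across imaging domains, this kind of guidance injects a domain-invariant prior into deep visual representations, which is the intuition behind the generalization gains reported in the abstract.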