AI Summary
To address semi-supervised 3D medical image segmentation under limited labeled data, this paper proposes the first text-driven multi-planar interaction framework. Methodologically, it introduces a text–vision cross-modal collaboration mechanism comprising: (1) multi-planar text-enhanced representation; (2) a class-aware semantic alignment module that mitigates inter-modal semantic discrepancy; and (3) a dynamic cognitive enhancement module that enables deep feature-level interaction through learnable optimization variables. The key innovation lies in pioneering the integration of textual guidance into semi-supervised 3D medical image segmentation and in establishing a structured multi-planar interaction paradigm. Extensive experiments on three public benchmarks demonstrate significant gains over state-of-the-art methods, validating the effectiveness of textual priors for enhancing visual semantic embedding. The source code is publicly available.
Abstract
Semi-supervised medical image segmentation is a crucial technique for alleviating the high cost of data annotation. When labeled data is limited, textual information can provide additional context to enhance visual semantic understanding. However, research exploring the use of textual data to enhance visual semantic embeddings in 3D medical imaging tasks remains scarce. In this paper, we propose a novel text-driven multiplanar visual interaction framework for semi-supervised medical image segmentation (termed Text-SemiSeg), which consists of three main modules: Text-enhanced Multiplanar Representation (TMR), Category-aware Semantic Alignment (CSA), and Dynamic Cognitive Augmentation (DCA). Specifically, TMR facilitates text-visual interaction through planar mapping, thereby enhancing the category awareness of visual features. CSA performs cross-modal semantic alignment between text features, augmented with learnable variables, and intermediate-layer visual features. DCA reduces the distribution discrepancy between labeled and unlabeled data through their interaction, thus improving the model's robustness. Finally, experiments on three public datasets demonstrate that our model effectively enhances visual features with textual information and outperforms other methods. Our code is available at https://github.com/taozh2017/Text-SemiSeg.
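To make the text-visual planar interaction described above concrete, the following is a minimal NumPy sketch, not the paper's implementation. It assumes a simple design: a 3D feature volume is average-pooled along each of the three anatomical planes, and each planar view then attends to per-class text embeddings via scaled dot-product cross-attention with a residual text injection. All function and variable names here are illustrative; the actual TMR module may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_planar_interaction(vol_feats, text_feats):
    """Hypothetical sketch of text-enhanced multiplanar representation.

    vol_feats:  (D, H, W, C) visual feature volume
    text_feats: (K, C) one embedding per class text description

    Returns a list of three text-enhanced planar token sets, one per
    anatomical plane (axial, coronal, sagittal).
    """
    D, H, W, C = vol_feats.shape
    # Planar mapping: average-pool along each spatial axis in turn,
    # flattening the remaining plane into a token sequence of shape (N, C).
    planes = [vol_feats.mean(axis=a).reshape(-1, C) for a in (0, 1, 2)]
    enhanced = []
    for p in planes:
        # Cross-attention: planar tokens query the class text embeddings.
        attn = softmax(p @ text_feats.T / np.sqrt(C))  # (N, K)
        # Residual injection of text semantics into visual tokens.
        enhanced.append(p + attn @ text_feats)         # (N, C)
    return enhanced
```

Under this sketch, each planar token receives a convex combination of class text embeddings weighted by visual-textual similarity, which is one plausible way planar mapping could sharpen the category awareness of visual features.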