Text-driven Multiplanar Visual Interaction for Semi-supervised Medical Image Segmentation

📅 2025-07-16
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address semi-supervised 3D medical image segmentation under limited labeled data, this paper proposes the first text-driven multiplanar interaction framework. Methodologically, it introduces a text–vision cross-modal collaboration mechanism comprising: (1) a text-enhanced multiplanar representation that injects textual guidance into planar views of the visual features; (2) a category-aware semantic alignment module that introduces learnable variables to mitigate inter-modal semantic discrepancy; and (3) a dynamic cognitive augmentation module that reduces the distribution discrepancy between labeled and unlabeled data through their interaction. The key innovation lies in pioneering the integration of textual guidance into semi-supervised 3D medical image segmentation and establishing a structured multiplanar interaction paradigm. Extensive experiments on three public benchmarks demonstrate significant performance gains over state-of-the-art methods, validating the effectiveness of textual priors in enhancing visual semantic embeddings. The source code is publicly available.
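As a rough, hypothetical sketch of what category-aware alignment between text embeddings (augmented with learnable variables) and intermediate visual features could look like in PyTorch; the class name, shapes, and loss choice below are assumptions for illustration, not the authors' released Text-SemiSeg code (linked in the abstract below).

```python
# Hypothetical sketch only -- not the authors' released implementation.
# Assumes per-class text embeddings from a frozen text encoder and an
# intermediate 3D visual feature map; all names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoryAwareAlignmentSketch(nn.Module):
    def __init__(self, text_dim=512, vis_dim=256, align_dim=128,
                 num_classes=2, num_learnable=4):
        super().__init__()
        # Learnable variables attached to each class's text embedding.
        self.learnable_tokens = nn.Parameter(
            0.02 * torch.randn(num_classes, num_learnable, text_dim))
        self.text_proj = nn.Linear(text_dim, align_dim)
        self.vis_proj = nn.Conv3d(vis_dim, align_dim, kernel_size=1)

    def forward(self, text_emb, vis_feat):
        # text_emb: (num_classes, text_dim); vis_feat: (B, vis_dim, D, H, W)
        fused = torch.cat([text_emb.unsqueeze(1), self.learnable_tokens], dim=1).mean(dim=1)
        t = F.normalize(self.text_proj(fused), dim=-1)        # (C, align_dim)
        v = F.normalize(self.vis_proj(vis_feat), dim=1)       # (B, align_dim, D, H, W)
        # Voxel-wise similarity to each class's text prototype.
        return torch.einsum('cd,bdxyz->bcxyz', t, v)          # (B, C, D, H, W)

def alignment_loss(sim, labels, temperature=0.07):
    # On labeled volumes, push each voxel toward its class text prototype.
    return F.cross_entropy(sim / temperature, labels)
```

In a training loop, such a similarity map could serve as an auxiliary logit map supervised only on the labeled subset, leaving the main segmentation head unchanged.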

πŸ“ Abstract
Semi-supervised medical image segmentation is a crucial technique for alleviating the high cost of data annotation. When labeled data is limited, textual information can provide additional context to enhance visual semantic understanding. However, research exploring the use of textual data to enhance visual semantic embeddings in 3D medical imaging tasks remains scarce. In this paper, we propose a novel text-driven multiplanar visual interaction framework for semi-supervised medical image segmentation (termed Text-SemiSeg), which consists of three main modules: Text-enhanced Multiplanar Representation (TMR), Category-aware Semantic Alignment (CSA), and Dynamic Cognitive Augmentation (DCA). Specifically, TMR facilitates text-visual interaction through planar mapping, thereby enhancing the category awareness of visual features. CSA performs cross-modal semantic alignment between the text features, augmented with introduced learnable variables, and the intermediate visual features. DCA reduces the distribution discrepancy between labeled and unlabeled data through their interaction, thus improving the model's robustness. Finally, experiments on three public datasets demonstrate that our model effectively enhances visual features with textual information and outperforms other methods. Our code is available at https://github.com/taozh2017/Text-SemiSeg.
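To make the planar-mapping idea concrete, here is a minimal, hypothetical PyTorch sketch: the 3D feature volume is pooled along each axis into three planar views, each view is conditioned on the class text embeddings via cross-attention, and the enhanced planes are broadcast back and fused residually. The module name, the pooling-plus-attention design, and all shapes are assumptions for illustration, not the released TMR implementation in the repository above.

```python
# Hypothetical sketch of text-driven multiplanar interaction -- assumptions only,
# not the released Text-SemiSeg implementation.
import torch
import torch.nn as nn

class MultiplanarTextInteractionSketch(nn.Module):
    def __init__(self, vis_dim=256, text_dim=512, num_heads=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, vis_dim)
        # One cross-attention block shared by the three planar views.
        self.cross_attn = nn.MultiheadAttention(vis_dim, num_heads, batch_first=True)
        self.fuse = nn.Conv3d(vis_dim, vis_dim, kernel_size=1)

    def forward(self, vis_feat, text_emb):
        # vis_feat: (B, C, D, H, W); text_emb: (num_classes, text_dim)
        B, C, D, H, W = vis_feat.shape
        text = self.text_proj(text_emb).unsqueeze(0).expand(B, -1, -1)   # (B, K, C)

        # Project the volume onto three orthogonal planes by mean pooling.
        planes = [vis_feat.mean(dim=2), vis_feat.mean(dim=3), vis_feat.mean(dim=4)]

        enhanced = []
        for p in planes:
            b, c, h, w = p.shape
            tokens = p.flatten(2).transpose(1, 2)                         # (B, h*w, C)
            attn_out, _ = self.cross_attn(tokens, text, text)             # text-conditioned tokens
            enhanced.append(attn_out.transpose(1, 2).reshape(b, c, h, w))

        # Broadcast each enhanced plane back to the volume and fuse residually.
        vol = (enhanced[0].unsqueeze(2) + enhanced[1].unsqueeze(3) + enhanced[2].unsqueeze(4)) / 3.0
        return vis_feat + self.fuse(vol)
```

Pooling to 2D planes before attending keeps the attention cost proportional to a slice rather than the full volume, which is one plausible reading of why a planar mapping is used at all.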
Problem

Research questions and friction points this paper is trying to address.

Enhance semi-supervised medical image segmentation using text
Align text and visual features for better semantic understanding
Reduce distribution gap between labeled and unlabeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-enhanced Multiplanar Representation for visual interaction
Category-aware Semantic Alignment for cross-modal learning
Dynamic Cognitive Augmentation to reduce data discrepancy
Kaiwen Huang
Nanjing University of Science and Technology
Medical Image Processing · Semi-Supervised Learning
Yi Zhou
School of Computer Science and Engineering, Southeast University, China.
Huazhu Fu
Principal Scientist, IHPC, A*STAR
Medical Image Analysis · AI for Healthcare · Medical AI · Trustworthy AI
Yizhe Zhang
School of Computer Science and Engineering, Nanjing University of Science and Technology, China.
Chen Gong
School of Computer Science and Engineering, Nanjing University of Science and Technology, China.
Tao Zhou
School of Computer Science and Engineering, Nanjing University of Science and Technology, China.