🤖 AI Summary
To address challenges in generalist 3D CT image segmentation—including underutilization of language prompts, weak multi-class generalization, and voxel-text information density imbalance—this paper proposes the first voxel-language alignment and interaction framework. Methodologically, it introduces (1) cross-modal representation alignment coupled with cosine-similarity-based voxel-wise classification; (2) a complexity-aware pseudo-heatmap generation mechanism that focuses on ambiguous regions via a learnable Gaussian mixture distribution; and (3) a lightweight differentiable interaction module. Evaluated on multi-source CT datasets, the method achieves state-of-the-art zero-shot transfer performance without fine-tuning, demonstrating strong generalization to unseen anatomical classes and scanner domains. It significantly reduces parameter count and training cost compared to prior approaches, while maintaining high accuracy and robustness across diverse clinical imaging conditions.
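The cosine-similarity voxel-wise classification described above can be sketched as follows. This is a minimal NumPy illustration with made-up dimensions; the paper's actual encoders, embedding sizes, and any temperature scaling are not specified here, so `cosine_classify` and its shapes are assumptions for illustration only.

```python
import numpy as np

def cosine_classify(voxel_feats, text_feats):
    """Classify each voxel by cosine similarity to class text embeddings.

    voxel_feats: (N, D) voxel embeddings in the shared representation space.
    text_feats:  (C, D) one text-prompt embedding per anatomical class.
    Returns:     (N,)   predicted class index per voxel.
    """
    v = voxel_feats / np.linalg.norm(voxel_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sim = v @ t.T              # (N, C) cosine similarities
    return sim.argmax(axis=1)  # nearest class prompt in the shared space

# toy example: 4 voxels, 3 classes, 8-dim embeddings
rng = np.random.default_rng(0)
preds = cosine_classify(rng.normal(size=(4, 8)), rng.normal(size=(3, 8)))
```

Because classification reduces to a similarity lookup against class embeddings, adding or removing classes only changes the text-embedding matrix, not the network head, which is what enables zero-shot transfer to unseen classes.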
📝 Abstract
Satisfactory progress has recently been achieved in universal segmentation of CT images. Following the success of vision-language methods, there is a growing trend towards using text prompts and contrastive learning to build universal segmentation models. However, there is a significant imbalance in information density between 3D images and text prompts. Moreover, the standard fully-connected-layer approach to segmentation struggles with multiple classes and generalizes poorly. To address these challenges, we propose the VOxel Interacting with LAnguage method (VOILA) for universal CT image segmentation. First, we align voxels and language in a shared representation space and classify voxels on the basis of cosine similarity. We then develop the Voxel-Language Interaction framework to mitigate the class imbalance caused by foreground-background discrepancies and variations in target volumes. Furthermore, we propose a Complexity-Aware Sampling method that focuses on regions that are hard to segment, achieved by generating pseudo-heatmaps from a trainable Gaussian mixture distribution. Our results indicate that VOILA achieves improved performance with fewer parameters and lower computational cost during training, and generalizes well across diverse datasets without additional fine-tuning.
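The Complexity-Aware Sampling idea, weighting voxels by a pseudo-heatmap drawn from a Gaussian mixture, might look roughly like the sketch below. This is a simplified NumPy version under stated assumptions: the mixture parameters `means`, `scales`, and `weights` are illustrative placeholders (in the paper they are trainable), and the diagonal-covariance form and the sampling step are choices made here for brevity, not details taken from the paper.

```python
import numpy as np

def gmm_pseudo_heatmap(coords, means, scales, weights):
    """Evaluate a diagonal-covariance Gaussian mixture over voxel coordinates.

    coords:  (N, 3) normalized voxel coordinates in [0, 1]^3.
    means:   (K, 3) mixture component centers (learnable in the paper).
    scales:  (K, 3) per-axis standard deviations.
    weights: (K,)   mixture weights summing to 1.
    Returns: (N,)   heatmap normalized to sum to 1, usable as sampling
             probabilities that concentrate on hard-to-segment regions.
    """
    diff = coords[:, None, :] - means[None, :, :]        # (N, K, 3)
    logp = -0.5 * np.sum((diff / scales) ** 2, axis=-1)  # (N, K)
    comp = np.exp(logp) / np.prod(scales, axis=-1)       # unnormalized densities
    heat = comp @ weights                                # (N,) mixture density
    return heat / heat.sum()

# sample a voxel subset with probability proportional to the heatmap
coords = np.random.default_rng(1).random((1000, 3))
heat = gmm_pseudo_heatmap(
    coords,
    means=np.array([[0.5, 0.5, 0.5]]),   # one component centered mid-volume
    scales=np.array([[0.1, 0.1, 0.1]]),
    weights=np.array([1.0]),
)
picked = np.random.default_rng(2).choice(len(coords), size=64, replace=False, p=heat)
```

Sampling voxels from such a heatmap, rather than uniformly, spends the training budget on ambiguous regions, which is consistent with the abstract's motivation of countering foreground-background imbalance.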