Coordinative Learning with Ordinal and Relational Priors for Volumetric Medical Image Segmentation

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Volumetric medical image segmentation faces the dual challenges of complex anatomical structure and scarce annotated data. Conventional methods rely on rigid binary thresholds to define positive/negative samples, neglecting both the continuous nature of anatomical similarity and the global directional consistency of anatomical evolution across patients, which distorts the learned feature representations. To address this, we propose CORAL, a coordinative learning framework that jointly leverages two complementary priors: (i) a contrastive ranking loss that models continuous inter-slice anatomical similarity, and (ii) a sequential consistency constraint that explicitly encodes the ordered, population-level pattern of anatomical evolution. CORAL integrates these local relational and global structural priors within an unsupervised pretraining paradigm. Extensive experiments demonstrate that CORAL achieves state-of-the-art segmentation performance across multiple few-shot benchmarks while learning anatomically interpretable, semantically meaningful feature representations.

📝 Abstract
Volumetric medical image segmentation presents unique challenges due to the inherent anatomical structure and limited availability of annotations. While recent methods have shown promise by contrasting spatial relationships between slices, they rely on hard binary thresholds to define positive and negative samples, thereby discarding valuable continuous information about anatomical similarity. Moreover, these methods overlook the global directional consistency of anatomical progression, resulting in distorted feature spaces that fail to capture the canonical anatomical manifold shared across patients. To address these limitations, we propose Coordinative Ordinal-Relational Anatomical Learning (CORAL) to capture both local and global structure in volumetric images. First, CORAL employs a contrastive ranking objective to leverage continuous anatomical similarity, ensuring relational feature distances between slices are proportional to their anatomical position differences. In addition, CORAL incorporates an ordinal objective to enforce global directional consistency, aligning the learned feature distribution with the canonical anatomical progression across patients. Learning these inter-slice relationships produces anatomically informed representations that benefit the downstream segmentation task. Through this coordinative learning framework, CORAL achieves state-of-the-art performance on benchmark datasets under limited-annotation settings while learning representations with meaningful anatomical structure. Code is available at https://github.com/haoyiwang25/CORAL.
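The contrastive ranking objective described in the abstract (feature distances between slices proportional to their anatomical position differences) can be sketched as a triplet-style hinge over slice positions. This is an illustrative toy version in NumPy, not the paper's exact loss; the `margin` hyperparameter and the exhaustive triple enumeration are assumptions made for clarity.

```python
import numpy as np

def ranking_loss(features, positions, margin=0.1):
    """Toy contrastive ranking loss over slices of one volume.

    For an anchor slice i and two other slices j, k: if j is anatomically
    closer to i than k is (|p_i - p_j| < |p_i - p_k|), then the feature
    distance d(f_i, f_j) should be smaller than d(f_i, f_k) by at least
    `margin`. Violations are penalized with a hinge.
    """
    n = len(positions)
    total, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # j must be the anatomically closer slice to anchor i
                if abs(positions[i] - positions[j]) < abs(positions[i] - positions[k]):
                    d_ij = np.linalg.norm(features[i] - features[j])
                    d_ik = np.linalg.norm(features[i] - features[k])
                    total += max(0.0, d_ij - d_ik + margin)
                    count += 1
    return total / max(count, 1)
```

With features laid out in the same order as the slice positions, the loss is zero; permuting the features against the positions produces a positive penalty, which is the relational signal the pretraining exploits.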
Problem

Research questions and friction points this paper is trying to address.

Addressing limited annotations in volumetric medical image segmentation
Capturing continuous anatomical similarity beyond binary thresholds
Enforcing global directional consistency in anatomical progression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive ranking objective leverages continuous anatomical similarity
Ordinal objective enforces global directional consistency
Coordinative learning framework captures local and global structure
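The ordinal objective above (global directional consistency of anatomical progression) can likewise be sketched: project each slice's features onto a shared anatomical direction and penalize any decrease of that projection along the slice order. This is a simplified illustration, assuming a fixed `direction` vector; in the actual framework the alignment with the canonical cross-patient progression is learned.

```python
import numpy as np

def directional_consistency_loss(features, direction):
    """Toy ordinal consistency loss for one volume's slice sequence.

    Projects slice features onto a shared anatomical direction and applies
    a hinge to every decrease between consecutive slices, so the projected
    "anatomical coordinate" must progress monotonically through the volume.
    """
    proj = features @ direction           # scalar progression estimate per slice
    diffs = np.diff(proj)                 # consecutive differences along the volume
    return np.maximum(0.0, -diffs).sum()  # penalize only decreases
```

A feature sequence whose projections increase slice by slice incurs zero loss; reversing the sequence makes every consecutive difference negative and the penalty strictly positive, which is the directional prior the ordinal term enforces.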