Co-Seg: Mutual Prompt-Guided Collaborative Learning for Tissue and Nuclei Segmentation

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods treat tissue semantic segmentation and nuclear instance segmentation as independent tasks, neglecting their inherent pathological interdependencies and thereby limiting holistic histopathological understanding. To address this, we propose Co-Seg, the first framework enabling *cooperative segmentation* of tissue regions and nuclei. Co-Seg introduces a Region-Aware Prompt Encoder (RP-Encoder) that jointly generates semantic- and instance-level prompts, and a Mutual-Prompt Mask Decoder (MP-Decoder) that establishes a bidirectional guidance mechanism. Leveraging Transformer-based architectures, Co-Seg enforces cross-task contextual consistency during joint optimization. Evaluated on the PUMA dataset, Co-Seg achieves state-of-the-art performance across all three tasks—tissue semantic segmentation, nuclear instance segmentation, and panoptic segmentation—demonstrating the efficacy and advancement of unified modeling for comprehensive histopathological image analysis.

📝 Abstract
Histopathology image analysis is critical yet challenging, as it demands segmenting both tissue regions and nuclei instances for tumor microenvironment and cellular morphology analysis. Existing studies have focused on tissue semantic segmentation or nuclei instance segmentation separately, ignoring the inherent relationship between these two tasks and resulting in insufficient histopathology understanding. To address this issue, we propose a Co-Seg framework for collaborative tissue and nuclei segmentation. Specifically, we introduce a novel co-segmentation paradigm that allows the tissue and nuclei segmentation tasks to mutually enhance each other. To this end, we first devise a region-aware prompt encoder (RP-Encoder) to provide high-quality semantic and instance region prompts as prior constraints. Moreover, we design a mutual prompt mask decoder (MP-Decoder) that leverages cross-guidance to strengthen the contextual consistency of both tasks, collaboratively computing semantic and instance segmentation masks. Extensive experiments on the PUMA dataset demonstrate that the proposed Co-Seg surpasses state-of-the-art methods in the semantic, instance, and panoptic segmentation of tumor tissues and nuclei instances. The source code is available at https://github.com/xq141839/Co-Seg.
Problem

Research questions and friction points this paper is trying to address.

Collaborative segmentation of tissue regions and nuclei instances
Mutual enhancement between tissue and nuclei segmentation tasks
Addressing insufficient histopathology understanding through cross-guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mutual prompt-guided collaborative learning paradigm
Region-aware prompt encoder for prior constraints
Mutual prompt mask decoder for cross-guidance
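The authors' actual implementation lives in the linked GitHub repository; as a rough, hypothetical illustration only, the cross-guidance idea in the MP-Decoder (each task's prompt conditioning the other task's mask head) might be sketched as follows. All shapes, the gating mechanism, and the names `mutual_prompt_decode`, `w_sem`, and `w_inst` are assumptions for illustration, not the paper's design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mutual_prompt_decode(feat, sem_prompt, inst_prompt, w_sem, w_inst):
    """Hypothetical cross-guidance sketch: each task's prompt gates the
    shared features consumed by the *other* task's mask head."""
    # Instance-level prompt conditions the semantic branch...
    sem_feat = feat * sigmoid(inst_prompt)[:, None, None]
    # ...and the semantic-level prompt conditions the instance branch.
    inst_feat = feat * sigmoid(sem_prompt)[:, None, None]
    # 1x1-conv-style linear projections to per-class / per-instance logits.
    sem_logits = np.einsum('kc,chw->khw', w_sem, sem_feat)
    inst_logits = np.einsum('kc,chw->khw', w_inst, inst_feat)
    return sem_logits, inst_logits

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))        # shared image features
sem_prompt = rng.standard_normal(C)          # from the RP-Encoder (assumed vector form)
inst_prompt = rng.standard_normal(C)
w_sem = rng.standard_normal((5, C))          # 5 tissue classes (illustrative)
w_inst = rng.standard_normal((2, C))         # e.g. nuclei foreground/background
sem_logits, inst_logits = mutual_prompt_decode(feat, sem_prompt, inst_prompt, w_sem, w_inst)
print(sem_logits.shape, inst_logits.shape)   # (5, 4, 4) (2, 4, 4)
```

The point of the sketch is only the bidirectional dependency: neither mask head sees the shared features unconditioned, so errors and context propagate between the tissue and nuclei tasks during joint optimization.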
Qing Xu
University of Lincoln, Brayford Pool, UK
Wenting Duan
University of Lincoln
computer vision · image processing · medical imaging
Zhen Chen
Yale University, New Haven CT 06510, USA