No Masks Needed: Explainable AI for Deriving Segmentation from Classification

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical image segmentation faces challenges of poor generalizability and limited interpretability in unsupervised settings. To address this, we propose a novel unsupervised segmentation framework that requires no pixel-level annotations. Our method leverages pre-trained classification models and interpretable AI techniques—specifically, gradient-weighted class activation mapping (Grad-CAM)—to generate pixel-level saliency maps, which serve as pseudo-supervisory signals for end-to-end unsupervised transfer from classification to segmentation. Unlike conventional unsupervised segmentation approaches reliant on reconstruction or consistency constraints, our framework eliminates such dependencies, thereby enhancing both cross-domain generalizability and decision interpretability. Evaluated on three public benchmarks—CBIS-DDSM, NuInsSeg, and Kvasir-SEG—our method consistently outperforms state-of-the-art unsupervised and weakly supervised baselines in Dice coefficient and other key metrics. This work establishes a new paradigm for clinically trustworthy AI in medical image analysis.
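The core idea above — turning Grad-CAM saliency into a pseudo-mask — can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation: the function name, the fixed 0.5 threshold, and the toy feature maps are all assumptions for demonstration; in the paper the maps come from a pre-trained classifier and supervise a segmentation head end-to-end.

```python
import numpy as np

def grad_cam_pseudo_mask(activations, gradients, threshold=0.5):
    """Derive a binary pseudo-mask from a Grad-CAM saliency map.

    activations: (C, H, W) feature maps from a chosen conv layer
    gradients:   (C, H, W) gradients of the class score w.r.t. those maps
    threshold:   illustrative cutoff for binarizing the saliency map
    """
    # Grad-CAM channel weights: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))                       # (C,)
    # Weighted sum of feature maps; ReLU keeps positive class evidence only
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] (guarding against an all-zero map)
    if cam.max() > 0:
        cam = cam / cam.max()
    # Threshold the saliency map into a binary pseudo-supervisory mask
    return (cam >= threshold).astype(np.uint8)

# Toy example: 4 channels of 8x8 feature maps with random values
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
mask = grad_cam_pseudo_mask(acts, grads)
print(mask.shape)
```

In practice the resulting low-resolution map would be upsampled to the input size before serving as a pseudo-label, and the segmentation network is then trained against these masks instead of human annotations.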

📝 Abstract
Medical image segmentation is vital for modern healthcare and is a key element of computer-aided diagnosis. While recent advances in computer vision have explored unsupervised segmentation using pre-trained models, these methods have not translated well to the medical imaging domain. In this work, we introduce a novel approach that fine-tunes pre-trained models specifically for medical images, achieving accurate segmentation. Our method integrates Explainable AI to generate relevance scores that guide the segmentation process. Unlike traditional methods that excel on standard benchmarks but falter in medical applications, our approach achieves improved results on datasets such as CBIS-DDSM, NuInsSeg, and Kvasir-SEG.
Problem

Research questions and friction points this paper is trying to address.

How to segment medical images without pixel-level annotations while keeping decisions interpretable
How to adapt pre-trained classification models, which generalize poorly out of the box, to medical imaging
How to improve unsupervised segmentation accuracy on medical datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes pre-trained classification models for medical images without pixel-level labels
Uses Explainable AI (Grad-CAM) relevance scores as pseudo-supervisory signals for segmentation
Improves Dice and other key metrics on CBIS-DDSM, NuInsSeg, and Kvasir-SEG