Unsupervised Causal Prototypical Networks for De-biased Interpretable Dermoscopy Diagnosis

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses spurious correlations in deep learning–based dermoscopic image diagnosis, where dataset selection bias leads models to exploit environmental confounders, so that the learned prototypes encode misleading visual evidence. To mitigate this, the authors propose CausalProto, a causal prototype network that combines a structural causal model with information bottleneck constraints to orthogonally disentangle pathological features from environmental confounders in an unsupervised manner. By reformulating causal intervention via do-calculus as an efficient expectation-pooling operation in the prototype space, CausalProto achieves high-purity, debiased interpretability without supervision—a first in unsupervised diagnostic settings. Experiments across multiple dermoscopic datasets show that CausalProto attains both superior diagnostic accuracy and interpretability, significantly outperforming conventional black-box models.
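The unsupervised orthogonal disentanglement described above can be illustrated with a minimal sketch. The paper does not publish its loss; the code below is an assumed, simplified version in which cross-covariance between the pathological and environmental feature batches is driven to zero (orthogonality) and an L2 magnitude term stands in for the information-bottleneck compression penalty. Names like `disentanglement_penalty` and the `beta` weight are illustrative, not from the paper.

```python
import numpy as np

def disentanglement_penalty(Zp, Ze, beta=1.0):
    """Illustrative disentanglement loss (NOT the paper's exact objective):
    (1) an orthogonality term that penalizes the cross-covariance between
        a batch of pathological features Zp and environmental features Ze,
    (2) a crude IB-style compression term (L2 magnitude) standing in for
        the usual KL penalty on the encoded representations.
    Zp: (batch, d_p) array, Ze: (batch, d_e) array."""
    Zp_c = Zp - Zp.mean(axis=0)
    Ze_c = Ze - Ze.mean(axis=0)
    cross = Zp_c.T @ Ze_c / len(Zp)      # (d_p, d_e) cross-covariance
    ortho = np.sum(cross ** 2)           # zero when the two factors decorrelate
    compress = np.mean(Zp ** 2) + np.mean(Ze ** 2)
    return ortho + beta * compress

rng = np.random.default_rng(0)
Zp = rng.normal(size=(32, 8))
Ze = rng.normal(size=(32, 4))
loss = disentanglement_penalty(Zp, Ze)
```

Minimizing the `ortho` term alone encourages the two feature spaces to be statistically decorrelated, which is a common practical surrogate for the strict orthogonal disentanglement the summary refers to.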

📝 Abstract
Despite the success of deep learning in dermoscopy image analysis, its inherent black-box nature hinders clinical trust, motivating the use of prototypical networks for case-based visual transparency. However, inevitable selection bias in clinical data often drives these models toward shortcut learning, where environmental confounders are erroneously encoded as predictive prototypes, generating spurious visual evidence that misleads medical decision-making. To mitigate these confounding effects, we propose CausalProto, an Unsupervised Causal Prototypical Network that fundamentally purifies the visual evidence chain. Framed within a Structural Causal Model, we employ an Information Bottleneck–constrained encoder to enforce strict unsupervised orthogonal disentanglement between pathological features and environmental confounders. By mapping these decoupled representations into independent prototypical spaces, we leverage the learned spurious dictionary to perform backdoor adjustment via do-calculus, transforming complex causal interventions into efficient expectation pooling that marginalizes out environmental noise. Extensive experiments on multiple dermoscopy datasets demonstrate that CausalProto achieves superior diagnostic performance and consistently outperforms standard black-box models, while providing transparent, high-purity visual interpretability without the traditional accuracy–interpretability trade-off.
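The backdoor adjustment the abstract describes can be sketched concretely. Under the backdoor criterion, P(Y | do(X)) is approximated by averaging the classifier's prediction over every confounder prototype in the learned spurious dictionary, which is exactly an expectation-pooling operation. The snippet below is a minimal, assumed sketch: the linear head `W`, the dictionary `E`, and the function name `backdoor_expectation_pooling` are illustrative, not the paper's implementation.

```python
import numpy as np

def backdoor_expectation_pooling(z_path, spurious_dict, W, b):
    """Approximate P(Y | do(X)) by marginalizing over the learned spurious
    dictionary: average the classifier's logits over every environmental
    confounder prototype (backdoor adjustment as expectation pooling)."""
    logits = []
    for e in spurious_dict:                 # each confounder prototype e_k
        joint = np.concatenate([z_path, e])
        logits.append(W @ joint + b)        # linear head on [z_p ; e_k]
    return np.mean(logits, axis=0)          # expectation over confounders

# Toy dimensions, purely illustrative.
rng = np.random.default_rng(0)
d_p, d_e, n_cls, K = 8, 4, 3, 5
z_p = rng.normal(size=d_p)                  # pathological feature
E = rng.normal(size=(K, d_e))               # spurious dictionary (K prototypes)
W = rng.normal(size=(n_cls, d_p + d_e))
b = np.zeros(n_cls)

adjusted = backdoor_expectation_pooling(z_p, E, W, b)
# For a linear head, pooling the logits over the dictionary equals a single
# forward pass with the mean confounder prototype:
equivalent = W @ np.concatenate([z_p, E.mean(axis=0)]) + b
```

This also shows why the reformulation is efficient: with a linear head, the sum over confounders collapses into one forward pass with the pooled prototype, so the intervention adds essentially no inference cost.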
Problem

Research questions and friction points this paper is trying to address.

selection bias
confounding factors
prototypical networks
interpretable diagnosis
dermoscopy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Inference
Prototypical Networks
Unsupervised Disentanglement
De-biased Learning
Interpretable AI
Junhao Jia
Hangzhou Dianzi University
Explainable AI (XAI) · Interpretable Computer Vision · Medical Image Analysis
Yueyi Wu
Hangzhou Dianzi University
Huangwei Chen
Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science and Technology, Zhejiang University; Hangzhou Dianzi University
Haodong Jing
Xi’an Jiaotong University
Haishuai Wang
Harvard University
Data Mining · Machine Learning
Jiajun Bu
Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science and Technology, Zhejiang University
Lei Wu
Zhejiang University
Blockchain Security · System Security